URI: 
        _______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
       
       
       COMMENT PAGE FOR:
  HTML   It's insulting to read AI-generated blog posts
       
       
        blackhaj7 wrote 2 days ago:
         I used to be really good at spotting it, but my AI radar is
         getting less effective now. Often I don’t notice until late in
         an article, which is very annoying.
        
        There are some videos on Instagram that I didn’t notice were AI until
        my wife told me!
        
         If I want AI content, I will go to an AI. The only good outcome
         is that I am spending way less time on social media, because so
         much of it is now AI.
       
        jdlyga wrote 2 days ago:
        If you use AI to write a blog post, I'll use AI to summarize it.
       
        dankobgd wrote 2 days ago:
        Normal websites are not even findable now
       
        jibal wrote 2 days ago:
        I see no evidence or logical argument here. You have feelings ... fine.
        Mine are different.
       
        akst wrote 2 days ago:
         With the exception of using AI to proofread, I agree.
         
         By proofreading I mean just that: proofreading, not rewriting
         anything, and especially not using the output verbatim as
         suggested fixes. The author should make sure they retain their
         writing style & be assertive in deciding which corrections to
         accept.
       
        jeromie wrote 2 days ago:
        I had to go take a two-hour walk to calm down after my boss sent me a
        project proposal that was just ChatGPT gobbledygook.
       
        invisibleink wrote 2 days ago:
         YES, and also let's not use the printing press, photography, word
         processors, spell checkers, the internet and search engines,
         because they lack human touch, make us lazy, prevent deep
         thinking, blah blah...
       
          philipwhiuk wrote 2 days ago:
          Just because all those other inventions didn't wreck humanity doesn't
          mean this one won't.
       
            invisibleink wrote 2 days ago:
             The point is, every new technology attracts its share of
             romantic skeptics, and every time they fail they retreat to
             the same tired line:
            
            "Just because all those other inventions didn't wreck humanity
            doesn't mean this one won't"
            
            But that’s not an argument, it’s an evasion.
            
             Given that past inventions didn’t destroy us despite
             similar concerns, the burden is on you to show why this one
             is fundamentally different and uniquely catastrophic.
       
        yalogin wrote 3 days ago:
         This is unavoidable. Individual blogs may not use AI, but
         companies that live on user engagement absolutely will, churning
         out all types of content.
       
          fullshark wrote 3 days ago:
           I avoid it by not reading open web blogs.  Eventually open web
           message boards (like this one) will be fully contaminated as
           well, and I'll move to Discord or group chats, I suppose.
       
        sinuhe69 wrote 3 days ago:
         DeepL has a “Correct only” option, which can be quite handy
         for non-native speakers.
       
        K0balt wrote 3 days ago:
         It wouldn’t be so bad if it weren’t unbearable to read.
       
        dustypotato wrote 3 days ago:
         By all means, use AI. Just don't make it longer than it needs to
         be.
       
        gngoo wrote 3 days ago:
         I’m very happy that I can post AI-generated blog posts from my
         writing. I’m now averaging 500 unique daily visitors, with
         quite a few repeat visits and subscribers too. If it wasn’t for
         AI, I’d be back where I was before AI... 10 visitors per month?
         I don’t like writing, so I collaborate with AI to write entire
         blog posts. I don’t have AI “refine it”; I usually tell AI
         to take what I’m rambling about for 1000 words and rewrite it
         in my own style, cadence, rhythm and vibe. That way I can
         generate 3-5 blog posts per week, which surprisingly rank well
         and get posted on LinkedIn, Twitter and Reddit by others. So at
         this rate, the number of people who enjoy reading AI-generated
         blog posts is likely starting to outpace those who don’t.
       
        bn-l wrote 3 days ago:
         LLM-generated emails are probably the worst.
       
        vesterthacker wrote 3 days ago:
        When you criticize, it helps to understand the other’s perspective.
        
        I suppose I am writing to you because I can no longer speak to anyone.
        As people turn to technology for their every word, the space between
        them widens, and I am no exception. Everyone speaks, yet no one
        listens. The noise fills the room, and still it feels empty.
        
        Parents grow indifferent, and their children learn it before they can
        name it. A sickness spreads, quiet and unseen, softening every heart it
        touches. I once believed I was different. I told myself I still
        remembered love, that I still felt warmth somewhere inside. But perhaps
        I only remember the idea of it. Perhaps feeling itself has gone.
        
        I used to judge the new writers for chasing meaning in words. I thought
        they wrote out of vanity. Now I see they are only trying to feel
        something, anything at all. I watch them, and sometimes I envy them,
        though I pretend not to. They are lost, yes, but they still search. I
        no longer do.
        
        The world is cold, and I have grown used to it. I write to remember,
        but the words answer nothing. They fall silent, as if ashamed. Maybe
        you understand. Maybe it is the same with you.
        
        Maybe writing coldly is simply compassion, a way of not letting others
        feel your pain.
       
        Wowfunhappy wrote 3 days ago:
        I agree with everything except this part:
        
        > No, don't use it to fix your grammar
        
        How is this substantially different from using spellcheck? 
        I don't see any problem with asking an LLM to check for and fix
        grammatical errors.
       
        tdiff wrote 3 days ago:
        > Here is a secret: most people want to help you succeed.
        
         Most people don't care.
       
        foxfired wrote 3 days ago:
         Earlier this year, I used AI to help me improve some of my
         writing on my blog. It just has a better way of phrasing ideas
         than I do. But when I came back to read those same blog posts a
         couple months later, after I had encountered a lot more blog
         posts that I didn't know were AI-generated at the time, I saw the
         pattern. It sounds like the exact same author, give or take some
         degree of obligatory humor, writing all over the web with the
         same voice.
        
         I've found a better approach to using AI for writing. First, if I
         don't bother writing it, why should you bother reading it? LLMs
         can be great sounding boards. Treat them as teachers, not
         assistants. Your teacher is not going to write your essay for
         you, but he will teach you how to write and spot the parts that
         need clarification. I will share my process in the coming days;
         hopefully it will get some traction.
       
        pasteldream wrote 3 days ago:
        > people are far kinder than you may think
        
        Not everyone has this same experience of the world. People are harsh,
        and how much grace they give you has more to do with who you are than
        what you say.
        
        That aside, the worst problem with LLM-generated text isn’t that
        it’s less human, it’s that (by default) it’s full of filler,
        including excessive repetition and contrived analogies.
       
          zenel wrote 3 days ago:
          > Not everyone has this same experience of the world. People are
          harsh, and how much grace they give you has more to do with who you
          are than what you say.
          
          You okay friend?
       
            pasteldream wrote 3 days ago:
            Yes.
       
        LeoPanthera wrote 3 days ago:
        Anyone can make AI generated content. It requires no effort at all.
        
        Therefore, if I or anyone else wanted to see it, I would simply do it
        myself.
        
        I don't know why so many people can't grasp that.
       
        corporat wrote 3 days ago:
        The most thoughtful critique of this post isn’t that AI is inherently
        bad—but that its use shouldn’t be conflated with laziness or
        cowardice.
        
        Fact: Professional writers have used grammar tools, style guides, and
        even assistants for decades. AI simply automates some of these
        functions faster. Would we say Hemingway was lazy for using a
        typewriter? No—we’d say he leveraged tools.
        
        AI doesn’t create thoughts; it drafts ideas. The writer still
        curates, edits, and imbues meaning—just like a journalist editing a
        reporter’s notes or a designer refining Photoshop output. Tools
        don’t diminish creativity—they democratize access to it.
        
        That said: if you’re outsourcing your thinking to AI (e.g., asking an
        LLM to write your thesis without engaging), then yes, you’ve lost
        something. But complaining about AI itself misunderstands the problem.
        
        TL;DR: Typewriters spit out prose too—but no one blames writers for
        using them.
       
          rideontime wrote 3 days ago:
          For transparency, what role did AI serve in drafting this comment?
       
            corporat wrote 3 days ago:
            AI was used to analyze logical fallacies in the original blog post.
            I didn’t use it to draft content—just to spot the straw man,
            false dilemma, and appeal-to-emotion tactics in real time.
            
            Ironically, this exact request would’ve fit the blog’s own
            arguments: "AI is lazy" / "AI undermines thought." But since I was
            using AI as a diagnostic tool (not a creative one), it doesn’t
            count.
            
            Self-referential irony? Maybe. But at least I’m being
            transparent. :)
       
              rideontime wrote 2 days ago:
              I'd merely noticed that your comment mimicked the writing style
              of popular LLMs. Guessing you spend a lot of time with them?
       
              philipwhiuk wrote 2 days ago:
              [flagged]
       
        OptionOfT wrote 3 days ago:
        What am I even reading if it is AI generated?
        
        The reason AI is so hyped up at the moment is that you give it little,
        it gives you back more.
        
        But then whose blog-post am I reading? What really is the point?
       
        gr4vityWall wrote 3 days ago:
         Is it just me, or is OP posting a bunch of links on HN for karma
         farming? Some of them seem to be AI-generated, like this one:
        
  HTML  [1]: https://news.ycombinator.com/item?id=45724022
       
        throwawa14223 wrote 3 days ago:
        I should never spend more effort reading something than the author
        spent writing it. With AI-generated texts the author effort approaches
        zero.
       
        hereme888 wrote 3 days ago:
        You are absolutely right!
        
        Jokes aside, good article.
       
        tasuki wrote 3 days ago:
        > It seems so rude and careless to make me, a person with thoughts,
        ideas, humor, contradictions and life experience to read something spit
        out by the equivalent of a lexical bingo machine because you were too
        lazy to write it yourself.
        
        Agreed fully. In fact it'd be quite rude to force you to even read
        something written by another human being!
        
         I'm all for your right to decide what is and isn't worth reading,
         be it AI- or human-generated.
       
        johanam wrote 3 days ago:
         AI-generated text is like a plume of pollution spreading through
         the web. There's little we can do to keep it at bay. Perhaps
         transparency is the answer?
       
        madcaptenor wrote 3 days ago:
        ai;dr
       
        voidhorse wrote 3 days ago:
        If you struggle with communication, using AI is fine. What matters is
        caring about the result. You cannot just throw it over the fence.
        
        AI content in itself isn't insulting, but as TFA hits upon, pushing
        sloppy work you didn't bother to read or check at all yourself is
        incredibly insulting and just communicates to others that you don't
        think their time is valuable. This holds for non-AI generated work as
        well, but the bar is higher by default since you at least had to
        generate that content yourself and thus at least engage with it on a
        basic level. AI content is also needlessly verbose, employs trite and
        stupid analogies constantly, and in general has the nauseating, bland,
        soulless corporate professional communication style that anyone with
        even a mote of decent literary taste detests.
       
        akshatjiwan wrote 3 days ago:
         I don't know. Content matters more to me. Many of the articles
         that I read have so little information density that I find it
         hard to justify spending time on them. I often use AI to
         summarise text for me and then look up particular topics in
         detail if I like.
         
         Skimming was pretty common before AI too. People used to read and
         share notes instead of entire texts. AI has just made it easier.
         
         Reading long texts is not a problem for me if they're engaging.
         But often I find they just go on and on without getting to the
         point. Especially news articles. They are the worst.
       
        AnimalMuppet wrote 3 days ago:
         I mean, if you used an AI to generate it, you shouldn't mind if
         my AI reads it, rather than me.
       
        KindDragon wrote 3 days ago:
        > Everyone wants to help each other. And people are far kinder than you
        may think.
        
        I want to believe that. When I was a student, I built a simple HTML
        page with a feedback form that emailed me submissions. I received
        exactly one message. It arrived encoded; I eagerly decoded it and found
        a profanity-filled rant about how terrible my site was. That taught me
        that kindness online isn’t the default - it’s a choice. I still aim
        for it, but I don’t assume it.
       
          netule wrote 3 days ago:
          I’ve found that the kinds of people who leave comments or send
          emails tend to fall into two categories:
          
          1. They’re assholes.
          
          2. They care enough to speak up, but only when the thing stops
          working as expected.
          
          I think the vast majority of users/readers are good people who just
          don’t feel like engaging. The minority are vocal assholes.
       
        RIMR wrote 3 days ago:
        >No, don't use it to fix your grammar, or for translations
        
        Okay, I can understand even drawing the line at grammar correction, in
        that not all "correct" grammar is desirable or personal enough to
        convey certain ideas.
        
        But not for translation? AI translation, in my experience, has proven
        to be more reliable than other forms of machine translation, and
        personally learning a new language every time I need to read something
        non-native to me isn't reasonable.
       
        deadbabe wrote 3 days ago:
         If you’re going to AI-generate your blog, the least you could
         do is use a fine-tuned LLM that matches your style. Most people
         just toss a prompt into GPT-5 and call it a day.
       
        nazgu1 wrote 3 days ago:
         I agree, but if I had to name the most insulting thing about AI,
         it's scraping data without consent to train models, so that
         people no longer enjoy blog posting :(
       
        dcow wrote 3 days ago:
        It’s not that people don’t value creativity and expression. It’s
        that for 90% of the communication AI is being used for, the slightly
        worse AI gen version that took 30 min to produce isn’t worse enough
        to justify spending 4 hours on the hand rolled version. That’s the
        reality we’re living through right now. People are eating up the
        productivity boosts like candy.
       
        wltr wrote 3 days ago:
         It’s a cherry on top to see these silly AI-generated posts
         being seriously discussed on here.
       
        rootedbox wrote 3 days ago:
        I fixed it.
        
        It appears inconsiderate—perhaps even dismissive—to present me, a
        human being with unique thoughts, humor, contradictions, and
        experiences, with content that reads as though it were assembled by a
        lexical randomizer. When you rely on automation instead of your own
        creativity, you deny both of us the richness of genuine human
        expression.
        
        Isn’t there pride in creating something that is authentically yours?
        In writing, even imperfectly, and knowing the result carries your
        voice? That pride is irreplaceable.
        
        Please, do not use artificial systems merely to correct your grammar,
        translate your ideas, or “improve” what you believe you cannot.
        Make errors. Feel discomfort. Learn from those experiences. That is, in
         essence, the human condition.
         
         Human beings are inherently empathetic. We want to help one another.
        But when you interpose a sterile, mechanized intermediary between
        yourself and your readers, you block that natural empathy.
        
        Here’s something to remember: most people genuinely want you to
        succeed. Fear often stops you from seeking help, convincing you that
        competence means solitude. It doesn’t. Intelligent people know when
        to ask, when to listen, and when to contribute. They build meaningful,
         reciprocal relationships.
         
         So, from one human to another—from one consciousness of love, fear,
        humor, and curiosity to another—I ask: if you must use AI, keep it to
        the quantitative, to the mundane. Let your thoughts meet the world
        unfiltered. Let them be challenged, shaped, and strengthened by
        experience.
        
        After all, the truest ideas are not the ones perfectly written.
        They’re the ones that have been felt.
       
          tasuki wrote 3 days ago:
          Heh, nice. I suppose that was AI-generated? Your beginning:
          
          > It appears inconsiderate—perhaps even dismissive—to present me,
          a human being with unique thoughts, humor, contradictions, and
          experiences, with content that reads as though it were assembled by a
          lexical randomizer.
          
           I like that beginning better than the original:
          
          > It seems so rude and careless to make me, a person with thoughts,
          ideas, humor, contradictions and life experience to read something
          spit out by the equivalent of a lexical bingo machine because you
          were too lazy to write it yourself.
          
          No one's making anyone read anything (I hope). And yes, it might be
          inconsiderate or perhaps even dismissive to present a human with
          something written by AI. The AI was able to phrase this much better
          than the human! Thank you for presenting me with that, I guess?
       
        marstall wrote 3 days ago:
        also: mind-numbing.
       
        somat wrote 3 days ago:
        It is the duality of generated content.
        
        It feels great to use. But it also feels incredibly shitty to have it
        used on you.
        
         My recommendation: just give the prompt. If your readers want to
         expand it they can do so. Don't pollute others' experience by
         passing the expanded form around. Nobody enjoys that.
       
        jackdoe wrote 3 days ago:
         I think it is too late. There is non-zero profit in people
         visiting your content, and close to zero cost to make it. It is
         the same problem with music; in fact, I search YouTube for music
         only with before:2022.
        
        I recently wrote about the dead internet [1] out of frustration.
        
        I used to fight against it, I thought we should do "proof of humanity",
        or create rings of trust for humans, but now I think the ship has
        sailed.
        
         Today a colleague was sharing their screen in Google Docs and a
         big "USE GEMINI AI TO WRITE THE DOCUMENT" button was front and
         center. I am fairly certain that by the end of the year most
         words you read will be tokens.
        
         I am working towards moving my pi-hole from blacklist to
         whitelist, and after that just using local indexes with some
         datahoarding (squid, Wikipedia, SO, RFCs, libc, kernel.git,
         etc.).
        
         Maybe in the future we will just exchange local copies of our
         local "internet" via SD cards, like Cuba's sneakernet [2] and El
         Paquete Semanal [3].
        
  HTML  [1]: https://punkx.org/jackdoe/zero.txt
  HTML  [2]: https://en.wikipedia.org/wiki/Sneakernet
  HTML  [3]: https://en.wikipedia.org/wiki/El_Paquete_Semanal
       
          tasuki wrote 3 days ago:
           Uhh, that's a lot of links: [1] Where are the explanations of
           what they all mean? What is (nothing) vs `maxi` vs `mini` vs
           `nopic`? What is `100` vs `all` vs `top1m` vs `top` vs
           `wp1-0.8`?
          
  HTML    [1]: https://download.kiwix.org/zim/wikipedia/
       
            hamdingers wrote 3 days ago:
             Mini is the introduction and infobox of all articles, nopic
             is the full articles with no pictures, maxi is full articles
             with (small) images. Other tags are categories (football,
             geography, etc.).
            
            100 is the top 100 articles, top1m is top 1 million, 0.8 is
            (inexplicably) the top 45k articles.
            
            My recommendation: sort by size and download the largest one you
            can accommodate in the language you prefer.
            wikipedia_en_all_maxi_2025-08.zim is all wikipedia articles, with
            images, as of 2025-08 and it's a paltry 111G.
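             
             If you just want the big one, here is a minimal Python sketch
             to pull it (the filename is the one from the listing above;
             check for the current date before running):
             
                 # Download the full English Wikipedia zim (~111G) from the
                 # Kiwix mirror. The filename/date below is an example;
                 # verify against the listing first.
                 import urllib.request
                 
                 url = ("https://download.kiwix.org/zim/wikipedia/"
                        "wikipedia_en_all_maxi_2025-08.zim")
                 urllib.request.urlretrieve(url, url.rsplit("/", 1)[-1])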
            
            Kiwix publishes a library here, but it's equally unhelpful:
            
  HTML      [1]: https://download.kiwix.org/zim/README
  HTML      [2]: https://library.kiwix.org/
       
          gosub100 wrote 3 days ago:
          > thought we should do "proof of humanity"
          
          I thought about this in another context and then I realized: what
          system is going to declare you're human or not? AI of course
       
        wenbin wrote 3 days ago:
         It's similarly insulting to listen to your AI-generated fake
         podcasts (mostly via NotebookLM) [1]. Ten minutes spent on them
         is ten minutes wasted.
        
  HTML  [1]: https://www.kaggle.com/datasets/listennotes/ai-generated-fake-...
       
        masly wrote 4 days ago:
        In a related problem:
        
         I recently interviewed a person for a role as senior platform
         architect. The person was already working for a semi-reputable
         company. In the first interview, the conversation was okay, but
         my gut told me something was strange about this person.
        
         We gave the candidate a case to solve, with a few diagrams, and
         asked them to prepare a couple of slides to discuss the
         architecture.
        
        The person came back with 12 diagrams, all AI generated, littered with
        obvious AI “spelling”/generation mistakes.
        
         And when we questioned the person about why they thought we would
         gain trust and confidence in them from this obviously
         AI-generated content, they even became aggressive.
        
        Needless to say it didn’t end well.
        
        The core problem is really how much time is now being wasted in
        recruiting with people who “cheat” or outright cheat.
        
        We have had to design questions to counter AI cheating, and strategies
        to avoid wasting time.
       
        npteljes wrote 4 days ago:
        I agree with the author. If I detect that the article is written by an
        AI, I bounce off.
        
        I similarly dislike other trickery as well, like ghostwriters, PR
        articles in journalism, lip-syncing at concerts, and so on. Fuck off,
        be genuine.
        
         The reason people are upset about AI is that it can be used to
         easily generate a lot of text, but its usage is rarely disclosed.
         So when someone discovers AI usage, there is no way for the
         reader to tell how much of the article is signal and how much is
         noise. Without AI, that would hinge on the expertise or
         experience of the author, but with AI involved, all bets are off.
        
         The other thing is that reading someone's text involves forming a
         little bit of a connection with them. Discovering that AI (or
         someone else) has written the text feels like a betrayal of that
         connection.
       
        jschveibinz wrote 4 days ago:
        I'm not sure if this has been mentioned here yet, and I don't want to
        be pedantic, but for centuries famous artists, musicians, writers, etc.
        have used assistants to do their work for them.  The list includes (but
        in no way is this complete): DaVinci, Michelangelo, Rembrandt, Rubens,
         Raphael, Warhol, Koons, O'Keeffe, Hepworth, Hockney, Stephen King,
        Clancy, Dumas, Patterson, Elvis, Elton John, etc. etc.    Further, most
        scientific, engineering and artistic innovations are made "on the
        shoulders of giants."  As the saying goes: there is nothing new under
        the sun.  Nothing.  I suggest that the use of an LLM for writing is
        just another tool of human creativity to be used freely and often to
        produce even more interesting and valuable content.
       
          pertymcpert wrote 3 days ago:
          No that’s complete rubbish, it’s a bad analogy.
       
            pessimizer wrote 3 days ago:
            Counterpoint: It's a fine thought, and an excellent analogy.
       
              pertymcpert wrote 2 days ago:
               Believe it or not, two wrongs don't make a right.
       
        adverbly wrote 4 days ago:
         As someone who briefly wrote a bunch of AI-generated blog posts,
         I kind of agree... The voicing is terrible, and the only thing it
         does particularly well is replace the existing slop.
        
        I'm starting to pivot and realize that quality is actually way more
        important than I thought, especially in a world where it is very easy
        to create things of low quality using AI.
        
         Another place I've noticed it is in hiring. There are so many
         low-quality applications it's insane. One application with a full
         GitHub profile and a cover letter and/or video that actually
         demonstrates you understand where you are applying is worth more
         than 100 low-quality ones.
        
        It's gone from a charming gimmick to quickly becoming an ick.
       
        throwawayffffas wrote 4 days ago:
         I already found it insulting to read SEO-spam blog posts. The AI
         involved is beside the point.
       
        jquaint wrote 4 days ago:
        > Do you not enjoy the pride that comes with attaching your name to
        something you made on your own? It's great!
        
        This is like saying a photographer shouldn't find the sunset they
        photographed pretty or be proud of the work, because they didn't
        personally labor to paint the image of it.
        
         A lot more goes into a blog post than the actual act of typing
         the content out.
        
         Lazy work is always lazy work, but it's possible to make work you
         are proud of with AI, in the same way you can create work you are
         proud of with a camera.
       
        cyrialize wrote 4 days ago:
        I'm reading a blog because I'm interested in the voice a writer has.
        
        If I'm finding that voice boring, I'll stop reading - whether or not AI
        was used.
        
        The generic AI voice, and by that I mean very little prompting to add
        any "flavor", is boring.
        
        Of course I've used AI to summarize things and give me information,
        like when I'm looking for a specific answer.
        
        In the case of blogs though, I'm not always trying to find an "answer",
        I'm just interested in what you have to say and I'm reading for
        pleasure.
       
        latchkey wrote 4 days ago:
         As a test, I used AI to rewrite their blog post, keeping the same
         tone and context but with fewer words. It got the point across,
         and I enjoyed it more because I didn't have to read as much. I
         did edit it slightly to make it a bit less obviously AI'ish...
        
        ---
        
         Honestly, it feels rude to hand me something churned out by a
         lexical bingo machine when you could’ve written it yourself.
         I'm a person with thoughts, humor, contradictions, and
         experience, not a content bin.
        
        Don't you like the pride of making something that's yours? You should.
        
        Don't use AI to patch grammar or dodge effort. Make the mistake. Feel
        awkward. Learn. That's being human.
        
        People are kinder than you think. By letting a bot speak for you, you
        cut off the chance for connection.
        
        Here's the secret: most people want to help you. You just don't ask.
        You think smart people never need help. Wrong. The smartest ones know
        when to ask and when to give.
        
        So, human to human, save the AI for the boring stuff. Lead with your
        own thoughts. The best ideas are the ones you've actually felt.
       
        mirzap wrote 4 days ago:
         This post could easily have been generated by AI; there's no way
         to tell for sure. I'm more insulted if the title or blog
         thumbnail is misleading, or if the post is full of obvious
         nonsense, etc.
        
        If a post contains valuable information that I learn from it, I don't
        really care if AI wrote it or not. AI is just a tool, like any other
        tool humans invented.
        
        I'm pretty sure people had the same reaction 50 years ago, when the
        first PCs started appearing: "It's insulting to see your calculations
        made by personal electronic devices."
       
        saltysalt wrote 4 days ago:
        I'm pretty certain that the only thing reading my blog these days is
        AI.
       
        saint_fiasco wrote 4 days ago:
        I sometimes share interesting AI conversations with my friends using
        the "share" button on the AI websites. Often the back-and-forth is more
        interesting than the final output anyway.
        
        I think some people turn AI conversations into blog posts that they
        pass off as their own because of SEO considerations. If Twitter didn't
         discourage people from sharing links, perhaps we would see a lot
         more tweet threads that start with [1] ... and [2] ... instead of
         people trying to pass off AI-generated content as their own.
        
  HTML  [1]: https://chatgpt.com/share/
  HTML  [2]: https://claude.ai/share/
       
          Kim_Bruning wrote 3 days ago:
           I think the problem is lazy AI-generated content.
          
          The problem is that the current generation of tools "looks like
          something" even with minimal effort. This makes people lazy. Actually
          put in the effort and see what you get, with or without AI assist.
       
        mucio wrote 4 days ago:
         It's insulting to read text on a computer screen. I don't care if
         you write like a 5-year-old or if your message needs days or
         weeks to reach me. Use a pen, a pencil and some paper.
       
        jexe wrote 4 days ago:
         Reading an AI blog post (or Reddit post, etc.) just signals that
         the author doesn't care that much about the subject... which
         makes me care less too.
       
        vzaliva wrote 4 days ago:
         It is similarly insulting to read an ungrammatical blog post full
         of misspellings. So I do not subscribe to the part of your
         argument that says "No, don't use it to fix your grammar". Using
         AI to fix your grammar, if done right, is part of the learning
         process.
       
          bn-l wrote 3 days ago:
          Much less insulting than AI slop.
          
          I can imagine it’s hard to see the nuance if you’re ESL but
          it’s there.
       
          dinkleberg wrote 4 days ago:
           A critical piece of this is ensuring it is just fixing the
           grammar, not rewriting it in its own AI voice. This is why I
           think tools like Grammarly still have a useful edge over
           directly using an LLM: the UX lets you pick and choose which
           suggestions to adopt, and they also provide context on why they
           are making a given suggestion. It can still kill your "personal
           voice", so you need to be judicious with its use.
       
        photochemsyn wrote 4 days ago:
        I like the author's idea that people should publish the prompts they
        use to generate LLM output, not the output itself.
       
        z7 wrote 4 days ago:
        Hypothetically, what if the AI-generated blog post were better than
        what the human author of the blog would have written?
       
          philipwhiuk wrote 2 days ago:
          Better how?
       
        luisml77 wrote 4 days ago:
        Who cares about your feelings, it's a blog post.
        
        If the goal is to get the job done, then use AI.
        
        Do you really want to waste precious time for so little return?
       
          nhod wrote 4 days ago:
          "I'm choosing to be 'insulted' by the existence of an arbitrary thing
          in the universe and then upset by the insult I chose to ascribe to
          it."
       
        namirez wrote 4 days ago:
         > No, don't use it to fix your grammar, or for translations, or for
        whatever else you think you are incapable of doing. Make the mistake.
        Feel embarrassed. Learn from it. Why? Because that's what makes us
        human!
        
        I do understand the reasoning behind being original, but why make
        mistakes when we have tools to avoid them? That sounds like a strange
        recommendation.
       
          bn-l wrote 3 days ago:
          These days a spelling mistake actually increases the chance I’ll
           keep reading. I know you didn’t just shit this out with
           ChatGPT then fart loudly and call it a day.
       
        parliament32 wrote 4 days ago:
        I'm looking forward to the (inevitable) AI detection browser plugin
        that will mark the slop for me, at least that way I don't need to spend
        the effort figuring out if it's AI content or not.
       
        bluSCALE4 wrote 4 days ago:
        This is how I feel about some LinkedIn folks that are going all in w/
        AI.
       
        braza wrote 4 days ago:
        > No, don't use it to fix your grammar, or for translations, or for
        whatever else you think you are incapable of doing. Make the mistake.
        Feel embarrassed. Learn from it. Why? Because that's what makes us
        human!
        
        For essays, honestly, I do not feel so bad, because I can see that
        other than some spaces like HN the quality of the average online writer
        has dropped so much that I prefer to have some machine-assisted text
        that can deliver the content.
        
        However, my problem is with AI-generated code.
        
         In most cases, for trivial apps, I think AI-generated code will
         be OK to good; however, the issue I'm seeing as a code reviewer
         is that folks whose code design style you know are so heavily
         reliant on AI-generated code that you are sure they did not
         write, and do not understand, the code.
        
         One example: working with some data scientists and researchers,
         most of them used to write things in Pandas with some trivial for
         loops and some primitive imperative programming. Now, especially
         after Claude Code, most things are vectorized, with heavily
         compressed variable naming. Sometimes folks use Cython in data
         pipeline tasks or even push functional programming to an extreme.
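         
         To illustrate the shift (a toy example of my own, not their
         actual code; the column names are made up):
         
             import pandas as pd
             
             df = pd.DataFrame({"price": [10.0, 12.5, 9.9],
                                "qty": [2, 1, 4]})
             
             # Before: the trivial imperative loop style.
             totals = []
             for _, row in df.iterrows():
                 totals.append(row["price"] * row["qty"])
             df["total"] = totals
             
             # After: the vectorized form the assistants now produce.
             df["total"] = df["price"] * df["qty"]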
        
         Good performance is great, and leveling up the quality of the
         codebase is a net positive; however, I wonder whether, in a
         scenario where things go south and/or Claude Code is not
         available, those folks will be able to fix it.
       
          rphv wrote 2 days ago:
          You don't need to understand code for it to be useful, any more than
          you need to know assembly to write Python.
       
        holdenc137 wrote 4 days ago:
        I assume this is a double-bluff and the blog post WAS written by an AI
        o_O ?
       
        wouldbecouldbe wrote 4 days ago:
         I've always been bad at grammar, and I wrote a lot of newsletters
         & blogs for my first startups, which always got great feedback
         but also lots of grammar complaints. Really happy GPT is so great
         at catching those nowadays; it saves me a lot of grammar support
         requests ;)
       
          whshdjsk wrote 3 days ago:
           Or just get better?
           
           I don’t know how someone can be nerdy enough to be on Hacker
           News, but simultaneously not nerdy enough to pick up and intuit
           the rules of the English language through sheer osmosis.
       
            wouldbecouldbe wrote 2 days ago:
             I'm not a native English speaker, but even in my own language
             I make many mistakes. I'm happy I can spend my time coding
             instead of going back to school :)
       
        frstrtd_engnr wrote 4 days ago:
        These days, my work routine looks something like this - a colleague
        sends me a long, AI-generated PRD full of changes. When I ask him for
        clarification, he stumbles through the explanation. Does he care at
        all? I have no idea.
        
        Frustrated, I just throw that mess straight at claude-code and tell it
        to fix whatever nonsense it finds and do its best. It probably
        implements 80–90% of what the doc says — and invents the rest. Not
        that I’d know, since I never actually read the original AI-generated
        PRD myself.
        
        In the end, no one’s happy. The whole creative and development
        process has lost that feeling of achievement, and nobody seems to care
        about code quality anymore.
       
        nickdothutton wrote 4 days ago:
        If you are going to use AI to make a post, then please instruct it to
        make that post as short and information-dense as possible. It's one
        thing to read an AI summary but quite another to have to wade through
        paragraphs of faux "personality" and "conversational writing" of the
        sort that slop AIs regularly trowel out.
       
        hiergiltdiestfu wrote 4 days ago:
        Thank you! Heartfelt thank you!
       
        causal wrote 4 days ago:
        LinkedIn marketing was bad before AI, now half the content is just
        generated emoji-ridden listicles
       
        bhouston wrote 4 days ago:
         I am not totally sure about this.  I think that AI writing is
         just a progression of current trends.  Many things have made
         writing easier and lower cost: the printing press, typewriters,
         word processors, grammar/spell checkers, electronic distribution.
        
        This is just a continuation.  It does tend to mean there is less effort
        to produce the output and thus there is a value degradation, but this
        has been true all along this technology trend.
        
         I don't think we should be purists about how writing is produced.
       
          philipwhiuk wrote 2 days ago:
          There's a bar somewhere to what serves meaningful benefit, surely?
       
        magicalhippo wrote 4 days ago:
        Well Firefox just got an AI summarizing feature, so thankfully I don't
        have to...
       
        neilv wrote 4 days ago:
        I suspect that the majority of people who are shoveling BS in their
        blogs aren't doing it because they actually want to think and write and
        share and learn and be human; but rather, the sole purpose of the blog
        is for SEO, or to promote the personal brand of someone who doesn't
        want anything else.
        
        Perhaps the author is speaking to the people who are only temporarily
        led astray by the pervasive BS online and by the recent wildly popular
        "cheating on your homework" culture?
       
        snorbleck wrote 4 days ago:
        this is great.
       
        throwaway-0001 wrote 4 days ago:
         For me it’s insulting not to use an AI to reply. I’d say 90%
         of people would answer better with an AI assist in most business
         environments. Maybe even personal ones.
        
         It’s really funny how many business deals would go better if
         people put the requests into an AI to explain exactly what is
         being requested. Most people are not able to answer, and if they
         used an AI they could respond properly without wasting
         everyone’s time. But at least not using an AI reveals their
         competency (or rather, incompetence) level.
        
         It’s also sad that I need to tell people to put my message into
         an AI so they don’t ask me useless questions. AI can fill most
         of the gaps people don’t get. You might say my requests are not
         proper, but then how can an AI figure out what I want to say? I
         also put my requests into an AI when I can, and create ELI5
         explanations of the requests “for dummies”.
       
        Frotag wrote 4 days ago:
        The way I view it is that the author is trying to explain their mental
        model, but there's only so much you can fit into prose. It's my
        responsibility to fill in the missing assumptions / understand why X
         implies Y. And all the little things like consistent word choice,
         tone, and even the mistakes help with this. But mix in LLMs and
         now there's
        another layer / slightly different mental model I have to isolate,
        digest, and merge with the author's.
       
        keepamovin wrote 4 days ago:
        So don't read it.
       
        iMax00 wrote 4 days ago:
        I read anything as long as there is new and useful information
       
        jdnordy wrote 4 days ago:
        Anyone else suspicious this might be satire ironically written by an
        LLM?
       
        Charmizard wrote 4 days ago:
         Idk how I feel about this take, tbh. “Do things the old way
         because I like them that way” seems like poor reasoning.
        
        If folks figure out a way to produce content that is human, contextual
        and useful... by all means.
       
        amrocha wrote 4 days ago:
        Tangential, but when I heard the Zoom CEO say that in the future
        you’ll just send your AI double to a meeting for you I couldn’t
        comprehend how a real human being could ever think that that would be
        an ok thing to suggest.
        
        The absolute bare minimum respect you can have for someone who’s
        making time for you is to make time for them. Offloading that to AI is
        the equivalent of shitting on someone’s plate and telling them to eat
        it.
        
        I struggle everyday with the thought that the richest most powerful
        people in the world will sell their souls to get a bit richer.
       
        iamwil wrote 4 days ago:
        Lately, I've been writing more on my blog, and it's been helpful to
        change the way that I do it.
        
        Now, I take a cue from school, and write the outline first. With an
        outline, I can use a prompt for the LLM to play the role of a
        development editor to help me critique the throughline. This is helpful
         because I tend to meander if I'm thinking at the level of words
         and sentences rather than at the level of an outline.
        
        Once I've edited the outline for a compelling throughline, I can then
        type out the full essay in my own voice. I've found it much easier to
        separate the process into these two stages.
        
         Before outline critiquing: [1]. After outline critiquing: [2].
         I'm still tweaking the development editor; I find that it can be
         too much of a stickler about the form of the throughline.
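         
         The development-editor step is roughly this shape (a minimal
         sketch; the model name and prompt wording here are illustrative
         stand-ins, not my exact setup):
         
             from openai import OpenAI
             
             client = OpenAI()  # assumes OPENAI_API_KEY is set
             outline = open("outline.md").read()
             
             critique = client.chat.completions.create(
                 model="gpt-4o",  # placeholder model name
                 messages=[
                     {"role": "system",
                      "content": "You are a development editor. Critique "
                                 "the throughline of this outline: does "
                                 "each point motivate the next, and where "
                                 "does it meander?"},
                     {"role": "user", "content": outline},
                 ],
             )
             print(critique.choices[0].message.content)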
        
  HTML  [1]: https://interjectedfuture.com/destroyed-at-the-boundary/
  HTML  [2]: https://interjectedfuture.com/the-best-way-to-learn-might-be-s...
       
          whshdjsk wrote 3 days ago:
          And yet, Will, with all due respect, I can’t hear your voice in any
          of the 10 articles I skimmed. It’s the same rhetorical structure
          found in every other LLM blog.
          
           I suppose if it makes you feel like it’s better (even if it
           isn’t), and you enjoy it, go ahead. But know this: we can
           tell.
       
            iamwil wrote 3 days ago:
            The essays go back a couple years. How did I use LLMs to write in
            2021 and 2022?
            
             If you're talking about something more recent, there are only
             two essays I wrote with the outlining and throughline method
             I described above. And for all of the essays, I wrote every
             word you read on the page with my fingers tapping on the
             keyboard.
            
             Hence, I'm not actually sure you can tell. I believe you
             think I'm just one-shotting these essays by rambling to an
             LLM. I can tell you for sure that the results from doing that
             are pretty bad.
            
             All of them have the same rhetorical structure... probably
             because that's how I write without an LLM, and it's what I
             prompted the LLM (playing the role of a development editor
             critiquing outlines) to do! So if you're saying that I'm a
             bad writer (fair), that's one thing! But I'm definitely
             writing these myself. shrug
       
        futurecat wrote 4 days ago:
         Slop excepted, writing is a very difficult activity that has
         always been outsourced to some extent, either to an individual, a
         team, or to some software (spell checkers, etc.). Of course
         people will use AI if they think it makes them better writers.
         Taste is the only issue here.
       
        jayers wrote 4 days ago:
        I think it is important to make the distinction between "blog post" and
        other kinds of published writing. It literally does not matter if your
        blog post has perfectly correct grammar or misspellings (though you
        should do a one-pass revision for clarity of thought). Blog posts are
        best for articulating unfinished thoughts. To that end, you are
        cheating yourself, the writer, if you use AI to help you write a blog
        post. It is through the act of writing it that you begin to grok with
        the idea.
        
        But you bet that I'm going to use AI to correct my grammar and spelling
        for the important proposal I'm about to send. No sense in losing
        credibility over something that can be corrected algorithmically.
       
        chemotaxis wrote 4 days ago:
        I don't like binary takes on this. I think the best question to ask is
        whether you own the output of your editing process. Why does this
        article exist? Does it represent your unique perspective? Is this you
        at your best, trying to share your insights with the world?
        
        If yes, there's probably value in putting it out. I don't care if you
        used paper and ink, a text editor, a spell checker, or asked an LLM for
        help.
        
         On the flip side, if anyone could've asked an LLM for the exact
         same text, and if you're outsourcing the critical thinking to the
         reader - then yeah, I think you deserve scorn. It's no different
         from content-farmed SEO spam.
        
        Mind you, I'm what you'd call an old-school content creator. It would
        be an understatement to say I'm conflicted about gen AI. But I also
        feel that this is the most principled way to make demands of others: I
        have no problem getting angry at people for wasting my time or
        polluting the internet, but I don't think I can get angry at them for
        producing useful content the wrong way.
       
          dheatov wrote 3 days ago:
           I feel like plagiarism is an appropriate analogy. Students can
           always argue they still learned something from it, yada yada,
           and there's probably some truth in that. However, we still
           reject it on principle, in a pretty binary manner. I believe
           the same reasoning applies to LLM artifacts too, at least in
           spirit.
       
          jzb wrote 3 days ago:
          "but I don't think I can get angry at them for producing useful
          content the wrong way"
          
          What about plagiarism? If a person hacks together a blog post that is
          arguably useful but they plagiarized half of it from another person,
          is that acceptable to you? Is it only acceptable if it's mechanized?
          
          One of the arguments against GenAI is that the output is basically
          plagiarized from other sources -- that is, of course, oversimplified
          in the case of GenAI, but hoovering up other people's content and
          then producing other content based on what was "learned" from that
          (at scale) is what it does.
          
          The ecological impact of GenAI tools and the practices of GenAI
          companies (as well as the motives behind those companies) remain the
          same whether one uses them a lot or a little. If a person has an
          objection to the ethics of GenAI then they're going to wind up with a
          "binary take" on it. A deal with the devil is a deal with the devil:
          "I just dabbled with Satan a little bit" isn't really a consolation
          for those who are dead-set against GenAI in its current forms.
          
          My take on GenAI is a bit more nuanced than "deal with the devil",
          but not a lot more. But I also respect that there are folks even more
          against it than I am, and I'd agree from their perspective that any
          use is too much.
       
            chemotaxis wrote 3 days ago:
            My personal thoughts on gen AI are complicated. A lot of my public
            work was vacuumed up for gen AI, and I'm not benefitting from it in
            any real way. But for text, I think we already lost that argument.
            To the average person, LLMs are too useful to reject them on some
            ultimately muddied arguments along the lines of "it's OK for humans
            to train on books, but it's not OK for robots". Mind you, it pains
            me to write this. I just think that ship has sailed.
            
            I think we have a better shot at making that argument for music,
            visual art, etc. Most of it is utilitarian and most people don't
            care where it comes from, but we have a cultural heritage of
            recognizing handmade items as more valuable than the mass-produced
            stuff.
       
              jzb wrote 3 days ago:
               I don't think that ship has sailed as far as you suggest:
               there are strong proponents of LLMs/GenAI, but not, IMO,
               many more than there were for NFTs, cryptocurrencies, and
               other technologies that ultimately did not hit mainstream
               adoption.
              
              I don't think GenAI or LLMs are going away entirely - but I'm not
              convinced that they are inevitable and must be adopted, either.
              Then again, I'm mostly a hold-out when it comes to things like
              self checkout, too. I'd rather wait a bit longer in line to help
              ensure a human has a job than rush through self-checkout if it
              means some poor soul is going to be out of work.
       
              DEADMEAT wrote 3 days ago:
              > To the average person, LLMs are too useful to reject them
              
               The way LLMs are now, outside of the tech bubble the
               average person has no use for them.
              
              > on some ultimately muddied arguments along the lines of "it's
              OK for humans to train on books, but it's not OK for robots"
              
              This is a bizarre argument. Humans don't "train" on books, they
              read them. This could be for many reasons, like to learn
              something new or to feel an emotion. The LLM trains on the book
              to be able to imitate it without attribution. These activities
              are not comparable.
       
              JohnFen wrote 3 days ago:
              > I just think that ship has sailed.
              
              Sadly, I agree. That's why I removed my works from the open web
              entirely: there is no effective way for people to protect their
              works from this abuse on the internet.
       
          buu700 wrote 4 days ago:
          Exactly. If it's substantially the writer's own thoughts and/or
          words, who cares if they collaborated with an LLM, or autocomplete,
          or a spelling/grammar-checker, or a friend, or a coworker, or someone
          from Fiverr? This is just looking for arbitrary reasons to be upset.
          
          If it's not substantially their own writing or ideas, then sure, they
          shouldn't pass it off as such and claim individual authorship. That's
          a different issue entirely. However, if someone just wanted to share,
          "I'm 50 prompts deep exploring this niche topic with GPT-5 and
          learned something interesting; quoted below is a response with
          sources that I've fact-checked against" or "I posted on
          /r/AskHistorians and received this fascinating response from
          /u/jerryseinfeld", I could respect that.
          
          In any case, if someone is posting low-quality content, blame the
           author, not the tools they happened to use. OOP may as well say
           they only want to read blog posts written with vim, and that
           emacs users should stay off the internet.
          
          I just don't see the point in gatekeeping. If someone has something
          valuable to share, they should feel free to use whatever resources
          they have available to maximize the value provided. If using AI makes
          the difference between a rambling draft riddled with grammatical and
          factual errors, and a more readable and information-dense post at
          half the length with fewer inaccuracies, use AI.
       
            AppleBananaPie wrote 2 days ago:
            In my experience if the ai voice was immediately noticeable the
            writing provided nothing new and most of the time is actively wrong
            or trying to make itself seem important and sell me on something
            the owner has a stake in.
            
            Not sure if this is true for other people but it's basically always
            a sign of something I end up wishing I hadn't wasted my time
            reading.
            
            It isn't inherently bad by any means but it turns out it's a useful
            quality metric in my personal experience.
       
              buu700 wrote 2 days ago:
              That was essentially my takeaway. The problem isn't when AI was
              used. It's when readers can accurately deduce that AI was used.
              When someone uses AI skillfully, you'll never know unless they
              tell you.
       
                GuinansEyebrows wrote 2 days ago:
                i feel like i've seen this comparison made before, but LLMs
                are best applied like autotune. 99% of vocal recordings
                released on major (and even indie) labels have some degree of
                autotune applied. when done correctly, you can't tell (unless
                you're a grizzled engineer who can hear 1 dB of compression
                or slight EQ changes). it's only when it's cranked up or used
                lazily that it can detract from the overall product.
       
        carimura wrote 4 days ago:
        I feel like sometimes I write like an LLM, complete with [bad]
        self-deprecating humor, overly-explained points because I like first
        principles, random soliloquies, etc. Makes me worry that I'll try to
        change my style.
        
        That said, when I do try to get LLMs to write something, I can't stand
        it, and feel like the OP here.
       
        rcarmo wrote 4 days ago:
        I don't get all this complaining, TBH. I have been blogging for over
        25 years (20+ on the same site), have been using em dashes ever since
        I switched to a Mac (and because the Markdown parser I use converts
        double dashes into em dashes, which I quite like when I'm banging out
        text in vim), and have made it a point of running long-form posts
        through an LLM, asking it to critique my text for readability,
        because I have a tendency toward very long sentences/passages.
        
        AI is a tool to help you _finish_ stuff, like a wood sander. It's not
        something you should use as a hacksaw, or as a hammer. As long as you
        are writing with your own voice, it's just better autocorrect.
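        
        For what it's worth, that double-dash substitution is a tiny
        preprocessing step in SmartyPants-style Markdown parsers. A minimal
        sketch in Python (illustrative only; real parsers are more careful,
        skipping code spans, for instance):
        
          def smart_dashes(text):
              # Replace "---" first so it isn't consumed by the "--" rule.
              text = text.replace("---", "\u2014")  # em dash
              # Classic SmartyPants maps "--" to an em dash as well;
              # some dialects use an en dash here instead.
              text = text.replace("--", "\u2014")
              return text
        
          print(smart_dashes("banging out text in vim -- then publishing"))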
       
          fullshark wrote 3 days ago:
          The complaint is because people (marketers and people marketing
          themselves) are not using it that way, and instead are using it to
          generate low value blogspam.
       
          yxhuvud wrote 4 days ago:
          The problem is that a lot of people use it for a whole lot more than
          just polish. The LLM voice in a text get quite jarring very quickly.
       
          curioussquirrel wrote 4 days ago:
          100% agree. Using it to polish your sentences or fix small
          grammar/syntax issues is a great use case in my opinion. I
          specifically ask it not to completely rewrite or change my voice.
          
          It can also double as a peer reviewer and point out potential
          counterarguments, so you can address them upfront.
       
            philipwhiuk wrote 2 days ago:
            > I specifically ask it not to completely rewrite or change my
            voice.
            
            And LLMs always do what you say, absolutely always, no issues
            there.
       
        maxdo wrote 4 days ago:
        Typical black and white article to capitalize on I hate AI hype.
        
        Super top articles with millions of readers are done with AI. It’s
        not an ai problem it’s the content. If it’s watery and no style
        tuned it’s bad. Same as human author
       
        portaouflop wrote 4 days ago:
        It’s a clever post but people that use so to write personal blogposts
        ain’t gonna read this and change their mind. Only people who already
        hate using llms are gonna cheer you on.
        
        But this kind of content is great for engagement farming on HN.
        
        Just write “something something clankers bad”
        
        While I agree with the author it’s a very moot and uninspired point
       
        ericol wrote 4 days ago:
        > read something spit out by the equivalent of a lexical bingo machine
        because you were too lazy to write it yourself.
        
        Ha! That's a very clever spot on insult. Most LLMs would probably be
        seriously offended by this would thy be rational beings.
        
        > No, don't use it to fix your grammar, or for translations, or for
        whatever else you think you are incapable of doing. Make the mistake.
        
        OK, you are pushing it buddy. My mandarin is not that good; as a matter
        of fact, I can handle no mandarin at all. Or french to that matter. But
        I'm certain a decent LLM can do that without me having to resort to
        reach out to another person, that might not be available or have enough
        time to deal with my shenanigans.
        
        I agree that there are way too much AI slop being created and made
        public, but yet there are way too many cases where the use is fair and
        used for improving whatever the person is doing.
        
        Yes, AI is being abused. No, I don't agree we should all go taliban
        against even fair use cases.
       
          ericol wrote 4 days ago:
          As a side note, i hate posts where they go on and on and use 3 pages
          to go to the point.
          
          You know what I'm doing? I'm using AI to chase to the point and
          extract the relevant (For me) info.
       
        giltho wrote 4 days ago:
        Hey chatGPT, summarise this post for me
       
        alyxya wrote 4 days ago:
        I personally don’t think I care if a blog post is AI generated or
        not. The only thing that matters to me is the content. I use ChatGPT to
        learn about a variety of different things, so if someone came up with
        an interesting set of prompts and follow ups and shared a summary of
        the research ChatGPT did, it could be meaningful content to me.
        
        > No, don't use it to fix your grammar, or for translations, or for
        whatever else you think you are incapable of doing. Make the mistake.
        Feel embarrassed. Learn from it. Why? Because that's what makes us
        human!
        
        It would be more human to handwrite your blog post instead. I don’t
        see how this is a good argument. The use of tools to help with writing
        and communication should make it easier to convey your thoughts, and
        that itself is valuable.
       
          robwwilliams wrote 2 days ago:
          Agreed. This short target piece is an amusing Luddite rant. No true
          content other than to bemoan our first stumbling steps toward using
          AI to write and think.
          
          I am a reasonably good (but sloppy) writer and use Claude to help
          improve my text, my ideas, and the flow of sentences and paragraphs.
          A huge help once I have a good first draft. I treat Claude like a
          junior editor who is useful but requires a tight leash and sharp
          advice.
          
          This thoughtless piece is like complaining about getting help from
          professional human editors: a profession nearly killed off over the
          last three decades.
          
          Who can afford $50/hr human editorial services? Not me. Claude is a
          great “second best” and way faster and cheaper.
       
          somethingsome wrote 2 days ago:
          I don't mind either, I have way too few time to write blogposts, but
          I have some things that I want to share. So I focus on the content
          extensively, and use the llm to help with the style and the phrasing
          and grammar..
          
          But I often correct the result and change some wording.
          
          Maybe at the beginning, when I was less experienced with llms, I used
          more llm style, but now I find it a good compromise to convey what I
          think without hindering the message behind my awful writing :)
       
          xarope wrote 2 days ago:
          so this is the danger.    If you are an expert in the content, you'll
          realize the AI slop.
          
          If you are not an expert, you'll think the AI is amazing, without
          realizing the slop.
          
          I'll rather do without the AI slop, thanks.
       
          akst wrote 2 days ago:
          I would personally find it insulting if i ask someone something and
          they gave me ChatGPT output, i would rather then say idk and I look
          for answers else where. If I wanted to ask ChatGPT I would have done
          so myself.
          
          Generative AI tends to be very sure of itself. It doesn’t say, it
          doesn’t know when it doesn’t know.    Sometimes when it doesn’t
          it won’t engage in the premise of the question and instead give an
          answer to an easier question
       
          palmotea wrote 3 days ago:
          > I personally don’t think I care if a blog post is AI generated or
          not. The only thing that matters to me is the content.
          
          An LLM generated blog post is by definition derivative and bland.
          
          > I use ChatGPT to learn about a variety of different things, so if
          someone came up with an interesting set of prompts and follow ups and
          shared a summary of the research ChatGPT did, it could be meaningful
          content to me.
          
          Then say so, up front.
          
          But that's not what people do. They're lazy or lack ideas but want
          "content" (usually for some kind of self-promotional reason). So you
          get to read that.
       
            jibal wrote 2 days ago:
            People say "by definition" when they have no idea what the phrase
            actually means, and their use of it is intellectually dishonest.
       
          subsection1h wrote 3 days ago:
          > I personally don’t think I care if a blog post is AI generated or
          not.
          
          0% of your HN comments include URLs for sources that support the
          positions and arguments you've expressed at HN.[1] Do you generally
          not care about the sources of ideas? For example, when you study
          public policy issues, do you not differentiate between research
          papers published in the most prestigious journals and 500-word news
          articles written at the 8th-grade level by nonspecialist nobodies?
          
  HTML    [1]: https://hn.algolia.com/?type=comment&query=author:alyxya+htt...
       
          munificent wrote 3 days ago:
          > The only thing that matters to me is the content.
          
          The content itself does have value, yes.
          
          But some people also read to connect with other humans and find that
          connection meaningful and important too.
          
          I believe the best writing has both useful content and meaningful
          connection.
       
          bee_rider wrote 3 days ago:
          It is sort of fun to bounce little ideas off ChatGPT, but I can’t
          imagine wanting to read somebody else’s ChatGPT responses.
          
          IMO a lot of the dumb and bad behavior around LLMs could be solved by
          a “just share the prompts” strategy. If somebody wants to
          generate an email from bullet points and send it to me: just send the
          bullet points, and I can pass them into an LLM if I want.
          
          Blog post based on interesting prompts? Share the prompt. It’s just
          text completion anyway, so if a reader knows more about the topic
          than the prompt-author, they can even tweak the prompt (throw in some
          lingo to get the LLM to a better spot in the latent space or
          whatever).
          
          The only good reason not to do that is to save some energy in
          generation, but inference is pretty cheap compared to training,
          right? And the planet is probably doomed anyway at this point, so
          we may as well enjoy the ride.
       
            alyxya wrote 3 days ago:
            AI assisted blog posts could have an interleaved mix of AI and
            human written words where a person could edit the LLM’s output.
            If the whole blog post were simply a few prompts on ChatGPT with no
            human directly touching the output, then sure it makes sense to
            share the prompt.
       
          beej71 wrote 3 days ago:
          Do you care if a scifi book was written by an AI or human, out of
          curiosity?
       
            tpmoney wrote 3 days ago:
            I'm not the OP, but I've been thinking about this for a little
            bit since I read your question. Part of me says no: what could
            be more sci-fi than a complete and comprehensive story written
            by a computer? Who wouldn't want Data to have succeeded at
            writing a story that connects with his human compatriots? On the
            other hand, I also understand the concern, and the feeling of
            "something lost", when I consider a story written by a human vs.
            a machine.
            
            But if I'm truly honest with myself, I think in the long run I
            wouldn't care. I grew up on Science Fiction, and the stories I've
            always found most interesting were ones that explored human nature
            instead of just being techno fetishism. But the reality is I don't
            feel a human connection to Asimov, or Cherryh, or any of the
            innumerable short form authors who wrote for the SF&F magazines I
            devoured every chance I got. I remember the stories, but very
            rarely the names. So they might as well have been written by an AI
            since the human was never really part of the equation (for me as a
            reader).
            
            And even when I do remember the names, maybe the human isn't one I
            want a lot of "human connection" with anyway. Ender's Game, the
            short story and later the novel were stories I greatly enjoyed. But
            I feel like my enjoyment is hampered by knowing that the author
            of a phenomenal book, one with some interesting things to say on
            the pains caused by dehumanizing the other, has themselves
            become someone who often dehumanizes others. The human
            connection might be
            ironic now, but that doesn't make the story better for me. Here
            too, the story might as well have been written by an AI for all
            that the current person that the author is represents who they were
            (either in reality or just in my head) when I read those stories
            for the first time.
            
            Some authors I have been exposed to later in life, I have had a
            degree of human connection with. I felt sadness and pain when Steve
            Miller died and left his spouse and long time writing partner
            Sharon Lee to carry on the Liaden series. But that connection isn't
            what drew me to the stories in the first place and that connection
            is largely the same superficial parasocial one that the easy access
            into the private lives of famous people gives us. Sure I'm
            saddened, but honesty requires me to note I'm more sad that it
            reminds me eventually this decades spanning series will draw to a
            close, and likely with many loose ends. And so even here, if an AI
            were capable of producing such a phenomenal series of books, in a
            twisted way as a reader it would be better because they would never
            end. The world created by the author would live on forever, just
            like a "real" world should.
            
            Emotionally I feel like I should care that a book was or wasn't
            written by an AI. But if I'm truly honest with myself, the author
            being a human hasn't so far added much to the experience, except in
            some ways to make it worse, or to cut short something that I wish
            could have continued forever.
            
            All of that as a longwinded way of answering, "no, I don't think I
            would care".
       
              beej71 wrote 1 day ago:
              Very interesting!
              
              In contrast, I think for me a tremendous part of the joy I get
              from reading science fiction is knowing there's another inventive
              human on the other side of the page. When I know what I'm reading
              is the result of a mechanical computation, it loses that.
              
              But the real noodle-bender for me is would I still enjoy the book
              if I didn't know?
       
          strbean wrote 3 days ago:
          I just despise the trend of commenting "I asked ChatGPT about this
          and this is what it said:".
          
          It's like getting an unsolicited text with a "Let Me Google That For
          You" link. Yes, we can all ask ChatGPT about the thing. We don't need
          you to do it for us.
       
            sings wrote 3 days ago:
            What is remarkable is the frequency with which I’ve heard
            so-called subject matter experts do this on podcasts. It seems to
            me a very effective way to communicate your lack of any such
            expertise.
       
          k_r_z wrote 3 days ago:
          Couldn’t agree more with this. AI is a tool like everything else. I
          mean if You are not a native it could be handy just to suggest You
          the polishing the style and all the language quirks to some degree.
          Why when You use autocorrect You are the boss but when You use AI You
          turn to half brain with ChatGPT?
       
          rustystump wrote 3 days ago:
          I agree with you to a point. Ai will often suggest edits which
          destroy the authentic voice of a person. If you as a writer do not
          see these suggestions for what they are, you will take them and
          destroy the best part of your work.
          
          I write pretty long blog posts that some enjoy and dump them into
          various llms for review. I am pretty opinionated on taste so I
          usually only update grammar but it can be dangerous for some.
          
          To be more concrete, often ai tells me to be more “professional”
          and less “irreverent” which i think is bullshit. The suggestions
          it gives are pure slop. But if english isnt first language or you
          dont have confidence, you may just accept the slop.
       
          aakkaakk wrote 4 days ago:
          As long as you’re not using an autopen, because that is definitely
          not you!
          
  HTML    [1]: https://archive.ph/20250317072117/https://www.bloomberg.com/...
       
            jibal wrote 2 days ago:
            Trump uses it more than anyone.
       
          caconym_ wrote 4 days ago:
          > It would be more human to handwrite your blog post instead. I
          don’t see how this is a good argument. The use of tools to help
          with writing and communication should make it easier to convey your
          thoughts, and that itself is valuable.
          
          Whether I hand write a blog post or type it into a computer, I'm the
          one producing the string of characters I intend for you to read. If I
          use AI to write it, I am not. This is a far, far, far more important
          distinction than whatever differences we might imagine arise from
          hand writing vs. typing.
          
          > your thoughts
          
          No, they aren't! Not if you had AI write the post for you. That's the
          problem!
       
            alyxya wrote 3 days ago:
            I think of technology as offering a sliding scale for how much
            assistance it can provide. Your words could be literally the keys
            you press, or you could use some tool that fixes punctuation and
            spelling, or something that fixes the grammar in your sentence, or
            rewrites sentences to be more concise and flow more smoothly, etc.
            If I used AI to rewrite a paragraph to better express my idea, I
            still consider it fundamentally my thoughts. I agree that it can
            get to the point where using AI doesn’t constitute my thoughts,
            but it’s very much a gray area.
       
            gr4vityWall wrote 3 days ago:
            >I'm the one producing the string of characters I intend for you to
            read. If I use AI to write it, I am not. This is a far, far, far
            more important distinction than whatever differences we might
            imagine
            
            That apparently is not the case for a lot of people.
       
              sigwinch wrote 2 days ago:
              Yeah, I don't agree with the quoted part. If you experiment
              with replying to emails by hand, you'll practically avoid long
              threads. If you experiment with avoiding as much typing as
              possible by allowing an AI substitute, you'll probably end up
              erasing large portions. AI pad-out followed by human pare-down
              might be closer to handwritten.
       
              caconym_ wrote 3 days ago:
              s/important/significant/, then, if that helps make the point
              clearer.
              
              I cannot tell you that it objectively matters whether or not an
              article was written by a human or an LLM, but it should be clear
              to anybody that it is at least a significant difference in kind
              vs. the analogy case of handwriting vs. typing. I think somebody
              who won't acknowledge that is either being intellectually
              dishonest, or has already had their higher cognitive functions
              rotted away by excessive reliance on LLMs to do their thinking
              for them. The difference in kind is that of using power tools
              instead of hand tools to build a chair, vs. going out to a store
              and buying one.
       
                gr4vityWall wrote 3 days ago:
                I wasn't even arguing with you, nor saying that it doesn't
                matter to me, rather just pointing an out an observation.
                
                > I think somebody who won't acknowledge that is either being
                intellectually dishonest, or has already had their higher
                cognitive functions rotted away by excessive reliance on LLMs
                to do their thinking for them.
                
                This feels too aggressive for a good faith discussion on this
                site. Even if you do think that, there's no point in insulting
                the humans who could engage with you in that conversation.
       
                  caconym_ wrote 3 days ago:
                  > I wasn't even arguing with you, nor saying that it
                  doesn't matter to me; I was just pointing out an
                  observation.
                  
                  My interpretation of your comment was that it related to my
                  use of the word "important", which has a more subjective
                  connotation than "significant" and arguably allows my comment
                  to be interpreted in two ways. The second way (that I feel
                  people should care more about the distinction I highlighted)
                  was not my intended meaning, since obviously people can care
                  about whatever they want. It was a relevant observation of
                  imprecise wording on my part.
                  
                  > there's no point in insulting the humans who could engage
                  with you in that conversation.
                  
                  There would be no point in engaging them in that
                  conversation, either.
                  
                  Disagreeing with me that the difference in kind I highlighted
                  is important is fine, and maybe even an interesting
                  conversation for both sides. Disagreeing with me that there
                  is a significant difference in kind is just nonsensical, like
                  arguing that there's no meaningful difference, at any level,
                  between painting a painting yourself and buying one from a
                  store. How can you approach a conversation like that? Yet
                  positions like that appear in internet arguments all the
                  time, which are generally arguments between anonymous
                  strangers who often have no qualms about embracing total
                  intellectual dishonesty because the[ir] goal is just to make
                  their opponent mad enough that they forget the original point
                  they were trying to make and go chasing the goalposts all
                  over the room.
                  
                  The only winning move is not to play, which requires being
                  honest with yourself about who you're talking to and what
                  they're trying to get out of the conversation. I am willing
                  to share that honesty.
                  
                  I am, to be clear, not saying you are one of these people.
       
            zanellato19 wrote 4 days ago:
            The idea that an AI can keep the authors voice just means it is so
            unoriginal that it doesn't make a difference.
       
          AlexandrB wrote 4 days ago:
          If you want this, why would you want the LLM output and not just the
          prompts? The prompts are faster to read and as models evolve you can
          get "better" blog posts out of them.
          
          It's like being okay with reading the entirety of generated ASM after
          someone compiles C++.
       
          korse wrote 4 days ago:
          (Edit: not anymore, kek.)
          
          Somehow this is currently the top comment. Why?
          
          Most non-quantitative content has value due to a foundation of
          distinct lived experience. Averages of the lived experience of
          billions just don't hit the same, and are less likely to be
          meaningful to me (a distinct human). Thus, I want to hear your
          personal thoughts, sans direct algorithmic intermediary.
       
            medstrom wrote 2 days ago:
            HN favors very fresh comments, to give them all some time in the
            limelight.
       
          paulpauper wrote 4 days ago:
          I have human-written blog posts, and I can rest assured no one reads
          those either.
       
            jacquesm wrote 4 days ago:
            I have those too and I don't actually care who reads them. When I
            write it is mostly to organize my thoughts or to vent my
            frustration about something. Afterwards I feel better ;)
       
            yashasolutions wrote 4 days ago:
            Yeah, same here. I’ve got to the stage where what I write is
            mostly just for myself as a reminder, or to share one-to-one with
            people I work with. It’s usually easier to put it in a blog post
            than spend an hour explaining it in a meeting anyway. Given the
            state of the internet these days, that’s probably all you can
            really expect from blogging.
       
          thatjoeoverthr wrote 4 days ago:
          Even letting the LLM “clean it up” puts its voice on your text.
          In general, you don’t want its voice. The associations are
          LinkedIn, warnings from HR and affiliate marketing hustles. It’s
          the modern equivalent of “talking like a used car salesman”. Not
          everyone will catch it but do think twice.
       
            robwwilliams wrote 2 days ago:
            Only if you ask it to or let it lead you. Just say no.
       
            tptacek wrote 3 days ago:
            I don't like ChatGPT's voice any more than you do, but it is
            definitely not HR-voice. LLM writing tends to be in active voice
            with clear topic sentences, which is already 10x better writing
            than corporate-speak.
       
              kibwen wrote 3 days ago:
              Yep, it's like Coke Zero vs Diet Coke: 10x the flavor and 10x the
              calories.
       
                tptacek wrote 3 days ago:
                Coke Zero and Diet Coke are both noncaloric.
       
                  bigstrat2003 wrote 3 days ago:
                  ...that's the joke.
       
                  singleshot_ wrote 3 days ago:
                  If you’re playing the same games they play on the label,
                  sure. There is less than one calorie per serving.
                  
                  (Edit: in Diet Coke. Not too sure about Coke Zero).
       
                    mahemm wrote 2 days ago:
                    What game is played? To me it seems pretty straightforward
                    that for both the actual caloric content is ~0.
       
                      singleshot_ wrote 2 days ago:
                      I believe it’s .4 calories per serving which is less
                      than one and which rounds down to zero, but it’s not
                      approximately zero by a long shot.
       
                        zygentoma wrote 2 days ago:
                        How is 0.4 kcal "not approximately zero by a long
                        shot"?
                        
                        Especially when compared to a standard coke with around
                        150 kcal.
       
                          singleshot_ wrote 1 day ago:
                          Well, it’s almost half a calorie, to begin with.
       
                            GreenWatermelon wrote 1 day ago:
                            By the time I finish the can I'll have Burned
                            through more than 0.4 calories.
       
                  amitav1 wrote 3 days ago:
                  0 × 10 = 0
       
            ryanmerket wrote 4 days ago:
            It's really not hard to say "make it in my voice" especially if
            it's an LLM with extensive memory of your writing.
       
              philipwhiuk wrote 2 days ago:
              >  especially if it's an LLM with extensive memory of your
              writing.
              
              Personally I'm not submitting enough stuff to an LLM to give it
              enough to go on.
       
              tripzilch wrote 3 days ago:
              Only if you have a very low bar for what constitutes "in your
              voice".
              
              Just ask it to write "in the style of" a few famous writers with
              a recognizable style. It just can't do it. It'll do an awfully
              cringe attempt at it.
              
              And that's just how bad LLMs are at it. There's a more general
              problem. If you've ever read a posthumous continuation of a
              literary series by a different but skilled author, you know what
              I mean.
              
              For example, "And another thing..." by Eoin Colfer is written to
              be the final sequel to the Hitchhiker's Guide, after Douglas
              Adams died. And to their absolute credit, the author Eoin Colfer,
              in my opinion, pretty much nails Douglas Adams's tone to the
              extent it is humanly possible to do so. But no matter how close
              he got, there's a paradox here. Colfer can only replicate Adams's
              style. But only Adams could add a new element, and it would still
              be his style. While if Colfer had done exactly the same, he'd
              have been considered "off".
              
              Anyway, if a human writer can't pull it off, I doubt an LLM can
              do it.
       
              zarmin wrote 3 days ago:
              No man. This is the whole problem. Don't sell yourself short like
              that.
              
              What is a writing "voice"? It's more than just patterns and
              methods of phrasing. ChatGPT would say "rhythm and diction and
              tone" and word choice. But that's just the paint. A voice is the
              expression of your conscious experience trying to convey an idea
              in a way that reflects your experience. If it were just those
              semi-concrete elements, we would have unlimited Dickens; the
              concept could translate to music, we could have unlimited Mozart.
              Instead—and I hope you agree—we have crude approximations of
              all these things.
              
              Writing, even technical writing, is an art. Art comes from
              experience. Silicon can not experience. And experiencers (ie,
              people with consciousness) can detect soullessness. To think
              otherwise is to be tricked; listen to anything on suno, for
              example. It's amazing at first, and then you see through the
              trick. You start to hear it the way most people now perceive
              generated images as too "shiny". Have you ever generated an image
              and felt a feeling other than "neat"?
       
              merelysounds wrote 3 days ago:
              Best case scenario, this means writing new blog posts in your old
              voice, as reconstructed by AI; some might argue this gives your
              voice less opportunity to grow or evolve.
       
              thatjoeoverthr wrote 3 days ago:
              I think no, categorically. The computer can detect your typos and
              accidents. But if you made a decision to word something a certain
              way, that _is_ your voice. If a second party overrides this
              decision, it's now deviating from your voice. The LLM therefore
              can either deviate from your voice, or do nothing.
              
              That's no crime, so far. It's very normal to have writers and
              editors.
              
              But it's highly abnormal for everyone to have the _same_
              editor, one famous for writing exactly the text that everybody
              hates.
              
              It's like inviting Uwe Boll to edit your film.
              
              If there's a good reason to send outgoing slop, OK. But if your
              audience is more verbally adept, and more familiar with its
              style, you do risk making yourself look bad.
       
              rustystump wrote 3 days ago:
              I have tried this. It doesn't work. Why? A human's unique
              style, when executed, has a pattern, but each work contains
              "experiments" that deviate from the pattern. These deviations
              are how we evolve stylistically. AI cannot emulate this; it
              only picks up on a tiny bit of the pattern, so while it may
              repeat a few beats of the song, it falls far short of the
              whole.
              
              This is why heavily assisted AI writing is still slop. That
              fundamental learning that is baked in is gone. It is the same
              reason why corporate speak is so hated. It is basically
              intentional slop.
       
              chipotle_coyote wrote 4 days ago:
              You can say anything to an LLM, but it’s not going to actually
              write in your voice. When I was writing a very long blog post
              about “creative writing” from AIs, I researched Sudowrite
              briefly, which purports to be able to do exactly this; not only
              could it not write convincingly in my voice (and the novel I gave
              it has a pretty strong narrative voice), following Sudowrite’s
              own tutorial in which they have you get their app to write a few
              paragraphs in Dan Brown’s voice demonstrated it could not
              convincingly do that.
              
              I don’t think having a ML-backed proofreading system is an
              intrinsically bad idea; the oft-maligned “Apple Intelligence”
              suite has a proofreading function which is actually pretty good
              (although it has a UI so abysmal it’s virtually useless in most
              circumstances). But unless you truly, deeply believe your own
              writing isn’t as good as a precocious eighth-grader trying to
              impress their teacher with a book report, don’t ask an LLM to
              rewrite your stuff.
       
              px43 wrote 4 days ago:
              Exactly. It's so wild to me when people hate on generated text
              because it sounds like something they don't like, when they could
              easily tell it to set the tone to any other tone that has ever
              appeared in text.
       
                zarmin wrote 3 days ago:
                respectfully, read more.
       
          enraged_camel wrote 4 days ago:
          Content can be useful. The AI tone/prose is almost always annoying.
          You learn to identify it after a while, especially if you use AI
          yourself.
       
          apsurd wrote 4 days ago:
          Human as in unique kind of experiential learning. We are the sum of
          our mistakes.  So offloading your mistakes, becomes less human, less
          leaning into the human experience.
          
          Maybe humans aren't so unique after all, but that's its own topic.
       
          signorovitch wrote 4 days ago:
          I tend to agree, though not in all cases. If I’m reading because I
          want to learn something, I don’t care how the material was
          generated. As long as it’s correct and intuitive, and LLMs have
          gotten pretty good at that, it’s valuable to me. It’s always fun
          when a human takes the time to make something educational and
          creative, or has a pleasant style, or a sense of humor; but I’m not
          reading the blog post for that.
          
          What does bother me is when clearly AI-generated blog posts (perhaps
          unintentionally) attempt to mask their artificial nature through
          superfluous jokes or unnaturally lighthearted tone. It often obscures
          content and makes the reading experience inefficient, without the
          grace of a human writer that could make it worth it.
          
          However, if I’m reading a non-technical blog, I am reading because
          I want something human. I want to enjoy a work a real person sank
          their time and labor into. The less touched by machines, the better.
          
          > It would be more human to handwrite your blog post instead.
          
          And I would totally read handwritten blog posts!
       
            paulpauper wrote 4 days ago:
            AI- assisted or generated content tends to have an annoying
            wordiness or bloat to it, but only astute readers will pick up on
            it.
            
            But it can make for tiresome reading. Like, a 2000 word post can be
            compressed to 700 or something had a human editor pruned it.
       
          B56b wrote 4 days ago:
          Even if someone COULD write a great post with AI, I think the author
          is right in assuming that it's less likely than a handwritten one.
          People seem to use AI to avoid thinking hard about a topic.
          Otherwise, the actual writing part wouldn't be so difficult.
          
          This is similar to the common objection for AI-coding that the hard
          part is done before the actual writing. Code generation was never a
          significant bottleneck in most cases.
       
          MangoToupe wrote 4 days ago:
          > I use ChatGPT to learn about a variety of different things
          
          Why do you trust the output? Chatbots are so inaccurate you surely
          must be going out of your way to misinform yourself.
       
            cm2012 wrote 4 days ago:
            Chatbots are more reliable than 95% of people you can ask, on a
            wide variety of researched topics.
       
              strbean wrote 3 days ago:
              That's the funny thing to me about these criticisms. Obviously it
              is an important caveat that many clueless people need to be made
              aware of, but still funny.
              
              AI will just make stuff up instead of saying it doesn't know,
              huh? Have you talked to real people recently? They do the same
              thing.
       
              MangoToupe wrote 3 days ago:
              Sure, so long as the question is rather shallow. But how is this
              any better than search?
       
              jacquesm wrote 4 days ago:
              If I want to know about the law, I'll ask a lawyer (ok, not any
              lawyer, but it's a useful first pass filter). If I want to know
              about plumbing I'll ask a plumber. If I want to ask questions or
              learn about writing I will ask one or more writers. And so on.
              Experts in the field are way better at their field than 95% of
              the population, which you can ask but probably shouldn't.
              
              There are many hundreds of professions, and most of them take
              a significant fraction of a lifetime to master, and even then
              there is usually a daily stream of new insights. You can't
              just toss
              all of that information into a bucket and expect that to
              outperform the < 1% of the people that have studied the subject
              extensively.
              
              When Idiocracy came out I thought it was a hilarious movie. I'm
              no longer laughing, we're really putting the idiots in charge now
              and somehow we think that quantity of output trumps quality of
              output. I wonder how many scientific papers published this year
              will contain AI generated slop complete with mistakes. I'll bet
              that number is >> 0.
       
                runj__ wrote 3 days ago:
                Surely you don't always call up and pay for a lawyer any
                time you have an interest in or question about the law; you
                google it. In what world do you have the time, money, and
                interest to ask people about every single thing you want
                more information on?
                
                I've done small plumbing jobs after asking AI if it was safe,
                I've written legal formalia nonsense that the government wanted
                with the help of AI. It was faster, cheaper and I didn't bother
                anyone with the most basic of questions.
       
                  jibal wrote 2 days ago:
                  Indeed. The level of intellectual dishonesty on this page is
                  staggering.
       
                cm2012 wrote 3 days ago:
                In some evaluations, it is already outperforming doctors on
                text medical questions and lawyers on legal questions. I'd
                rather trust ChatGPT than a doctor who is barely listening, and
                the data seems to back this up.
       
                  jacquesm wrote 3 days ago:
                  The problem is that you don't know on what evaluations and
                  you are not qualified yourself. By the time you are that
                  qualified you no longer need AI.
                  
                  Try asking ChatGPT or whatever is your favorite AI supplier
                  about a subject that you are an expert about something that
                  is difficult, on par with the kind of evaluations you'd
                  expect a qualified doctor or legal professional to do. And
                  then check the answer given, then extrapolate to fields that
                  you are clueless about.
       
              soiltype wrote 4 days ago:
              Yeah... you're supposed to ask the 5%.
              
              If you have a habit of asking random lay persons for technical
              advice, I can see why an idiot chatbot would seem like an
              upgrade.
       
                strbean wrote 3 days ago:
                Surely if you have access to a technical expert with the time
                to answer your question, you aren't asking an AI instead.
       
                  contagiousflow wrote 3 days ago:
                  Books exist
       
                    runj__ wrote 3 days ago:
                    chatGPT exists
                    
                    (I'm not saying not to read books, but seriously: there are
                    shortcuts)
       
                      soiltype wrote 3 days ago:
                      ...and is unreliable, hence the origin of this thread.
       
            alyxya wrote 4 days ago:
            I try to make my best judgment regarding what to trust. It isn’t
            guaranteed that content written by humans is necessarily correct
            either. The nice thing about ChatGPT is that I can ask for sources,
            and sometimes I can rely on that source to fact check.
       
              MangoToupe wrote 3 days ago:
              Sure, but a chatbot will compound the inaccuracy.
       
              latexr wrote 4 days ago:
              > The nice thing about ChatGPT is that I can ask for sources
              
              And it will make them up just like it does everything else. You
              can’t trust those either.
              
              In fact, one of the simplest ways to find out a post is AI slop
              is by checking the sources posted at the end and seeing they
              don’t exist.
              
              Asking for sources isn’t a magical incantation that suddenly
              makes things true.
              
              > It isn’t guaranteed that content written by humans is
              necessarily correct either.
              
              This is a poor argument. The overwhelming difference with humans
              is that you learn who you can trust about what. With LLMs, you
              can never reach that level.
       
                the_af wrote 3 days ago:
                > And it will make them up just like it does everything else.
                You can’t trust those either.
                
                In tech-related matters such as coding, I've come to expect
                that every link ChatGPT provides as reference/documentation
                is simply wrong or nonexistent. I can count on the fingers
                of one hand the times I clicked on a link to a doc from
                ChatGPT that didn't result in a 404.
                
                I've had better luck with links to products from Amazon or eBay
                (or my local equivalent e-shop). But for tech documentation
                which is freely available online? ChatGPT just makes shit up.
       
          throw35546 wrote 4 days ago:
          The best yarn is spun from mouth to ear over an open flame. What is
          this handwriting?
       
            falcor84 wrote 4 days ago:
            It's what is used to feed the flames.
       
          k__ wrote 4 days ago:
          This.
          
          It's about to find the sweet spot.
          
          Vibe coding is crap, but I love the smarter autocomplete I get from
          AI.
          
          Generating whole blog posts from thin air is crap, but I love smart
          grammar, spelling, and diction fixes I get from AI.
       
          latexr wrote 4 days ago:
          > It would be more human to handwrite your blog post instead.
          
          “Blog” stands for “web log”. If it’s on the web, it’s
          digital, there was never a period when blogs were hand written.
          
          > The use of tools to help with writing and communication should make
          it easier to convey your thoughts
          
          If you’re using an LLM to spit out text for you, they’re not your
          thoughts, you’re not the one writing, and you’re not doing a good
          job at communicating. Might as well just give people your prompt.
       
            dingocat wrote 4 days ago:
            > “Blog” stands for “web log”. If it’s on the web, it’s
            digital, there was never a period when blogs were hand written.
            
            Did you use AI to write this...? Because it does not follow from
            the post you're replying to.
       
              latexr wrote 3 days ago:
              Read it again. I explicitly quoted the relevant bit. It’s the
              first sentence in their last paragraph.
       
            athrowaway3z wrote 4 days ago:
            > If you’re using an LLM to spit out text for you, they’re not
            your thoughts
            
            The thoughts I put into a text are mostly independent of the
            sentences or _language_ they're written in. Not completely
            independent, but to claim thoughts are completely dependent on text
            (thus also the language) is nonsense.
            
            > Might as well just give people your prompt.
            
            What would be the value of seeing a dozen diffs? By the same logic,
            should we also include every draft?
       
              y0eswddl wrote 2 days ago:
              The language we use actually very much dictates the way we
              think...
              
              For instance, there's a tribe that describes directions only
              using the cardinal directions, and as such they have no words
              for, nor mental concept of, "left" and "right".
              
              And, unsurprisingly, they're all much more proficient at
              navigation and have a better general sense of direction than
              the average human, because of the way they have to think about
              directions when just talking to each other.
              
              ===
              
              It's also why the best translators don't just do a
              word-for-word replacement, but have to think through cultural
              context and ideology on both sides of the conversation in
              order to make a more coherent translation.
              
              The language you use absolutely dictates how and what you
              think, as well as what particular message is conveyed.
       
              mrguyorama wrote 3 days ago:
              >The thoughts I put into a text are mostly independent of the
              sentences or _language_ they're written in.
              
              Not even true! Turning your thoughts into words is a very
              important and human part of writing. That's where you choose what
              ambiguities to leave, which to remove, what sort of implicit
              shared context is assumed, such important things as tone, and all
              sorts of other unconscious things that are important in writing.
              
              If you can't even make those choices, why would I read you? If
              you think making those choices is unimportant, why would I think
              you have something important to say?
              
              Uneducated or unsophisticated people seem to vastly underestimate
              what expertise even is, or just how much they don't know, which
              is why for example LLMs can write better than most fanfic
              writers, but that bar is on the damn floor and most people don't
              want to consume fanfic level writing for things that they are not
              fanatical about.
              
              There's this weird and fundamental misconception in pro-ai realms
              that context free "information" is somehow possible, as if you
              can extract "knowledge" from text, like you can "distill" a
              document and reduce meaning to some simple sentences. Like,
              there's this insane belief that you can meaningfully reduce text
              and maintain info.
              
              If you reduce "Lord of the flies" to something like "children
              shouldn't run a community", you've lost immense amounts of info.
              That is not a good thing. You are missing so much nuance and
              context and meaning, as well as more superficial (but not less
              important!) things like the very experience of reading that text.
              
              Consider that SOTA text compression algorithms can reduce text
              to about 1/10th of its original size. If you are reducing a
              text by more than that to "summarize" it or "reduce it to its
              main points", do you really think you are not losing massive
              amounts of information, context, or meaning?
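              
              That 1/10th figure is easy to sanity-check. A minimal sketch
              using Python's standard lzma module (illustrative only: the
              exact ratio depends heavily on the input, and a repetitive
              sample like this one compresses far better than ordinary
              prose):
              
                import lzma
                
                # Deliberately redundant sample text; real prose lands
                # closer to the ~1/10th ratio mentioned above.
                text = ("The boys assembled on the beach and argued about "
                        "the fire, the beast, and the hunt. ") * 500
                raw = text.encode("utf-8")
                packed = lzma.compress(raw)
                print(f"{len(raw)} -> {len(packed)} bytes "
                      f"({len(packed) / len(raw):.1%} of the original)")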
       
                athrowaway3z wrote 3 days ago:
                You can rewrite a sentence on every page of lord of the flies,
                and the same important ideas would still be there.
                
                You can have the thoughts in a different language and the same
                ideas are still there.
                
                You can tell an LLM to tweak a paragraph to better communicate
                a nuance until you're happy with it.
                
                ---
                
                Language isn't thought. It's extremely useful in that it lets
                us iterate on our thoughts. You can add in LLMs in that
                iteration loop.
                
                I get you wanted to vent because the volume of slop is annoying
                and a lot of people are degrading their ability to think by
                using it poorly, but 
                "If you’re using an LLM to spit out text for you, they’re
                not your thoughts" is just motivated reasoning.
       
                the_af wrote 3 days ago:
                > If you reduce "Lord of the flies" to something like "children
                shouldn't run a community"
                
                To be honest, and I hate to say this because it's
                condescending, it's a matter of literacy.
                
                Some people don't see the value in literature. They are the
                same kind of people who will say "what's the point of book X or
                movie Y? All that happens is ", or the dreaded "it's boring,
                nothing happens!". To these people, there's no journey, no
                pleasure with words, the "plot" is all that matters and the
                plot can be reduced to a sequence of A->B->C. I suspect they
                treat their fiction like junk food, a quick fix and then move
                on. At that point, it makes logical sense to have an LLM write
                it.
                
                It's very hard to explain the joy of words to people with that
                mentality.
       
            cerved wrote 4 days ago:
            > If it’s on the web, it’s digital, there was never a period
            when blogs were hand written.
            
            This is just pedantic nonsense
       
            jancsika wrote 4 days ago:
            > If you’re using an LLM to spit out text for you, they’re not
            your thoughts, you’re not the one writing, and you’re not doing
            a good job at communicating. Might as well just give people your
            prompt.
            
            It's like listening to Bach's Prelude in C from WTC I, where he
            just came up with a humdrum chord progression and used the
            exact same melodic pattern for each chord, for the entire
            piece. Thanks, but I can write a trivial for loop in C if I
            ever want that. What a loser!
            
            Edit: Lest HN thinks I'm cherry picking-- look at how many times
            Bach repeats the exact same harmony/melody, just shifting up or
            down by a step. A significant chunk of his output is copypasta. So
            if you like burritos filled with lettuce and LLM-generated blogs,
            by all means downvote me to oblivion while you jam out to
            "Robo-Bach"
       
              DontForgetMe wrote 3 days ago:
              "My LLM generated code is structurally the same as Bach' Preludes
              and therefore anyone who criticises my work but not Bach's is a
              hypocrite' is a wild take.
              
              And unless I'm misunderstanding, it's literally the exact point
              you made, with no exaggeration or added comparisons.
       
              pasteldream wrote 3 days ago:
              Sometimes repetition serves a purpose, and sometimes it
              doesn’t.
       
            Aeolun wrote 4 days ago:
            Except the prompt is a lot harder and less pleasant to read?
            
            Like, I’m totally on board with rejecting slop, but not all
            content that AI was involved in is slop, and it’s kind of
            frustrating that so many people see things so black and white.
       
              latexr wrote 3 days ago:
              > Except the prompt is a lot harder and less pleasant to read?
              
              It’s not a literal suggestion. “Might as well” is a well
              known idiom in the English language.
              
              The point is that if you’re not going to give the reader the
              result of your research and opinions and instead will just post
              whatever the LLM spits out, you’re not providing any value. If
              you gave the reader the prompt, they could pass it through an LLM
              themselves and get the same result (or probably not, because LLMs
              have no issue with making up different crap for the same prompt,
              but that just underscores the pointlessness of posting what the
              LLM regurgitated in the first place).
       
            ChrisMarshallNY wrote 4 days ago:
            > there was never a period when blogs were hand written.
            
            I’ve seen exactly that. In one case, it was JPEG scans of
            handwriting, but most of the time, it’s a cursive font (which
            may not really count as “handwritten”).
            
            I can’t remember which famous author it was who always
            submitted their manuscripts as cursive writing on yellow legal
            pads.
            
            Must have been thrilling to edit.
       
              y0eswddl wrote 2 days ago:
              The fact that this one example stands out so clearly to you
              gives more credence to the idea that this is rare and not a
              common aspect of blogging.
       
              latexr wrote 4 days ago:
              Isolated instances do not a period define. We can always find
              some example of someone who did something, but the point is it
              didn’t start like that.
              
              For example, there was never a period when movies were made by
              creating frames as oil paintings and photographing them. A couple
              of movies were made like that, but that was never the norm or a
              necessity or the intended process.
       
          c4wrd wrote 4 days ago:
          I think the author’s point is that by exposing yourself to
          feedback, you are the one who grows when you make an error. If
          you hand off all of your tasks to ChatGPT to solve, your brain
          will not grow and you will not learn.
       
          furyofantares wrote 4 days ago:
          People are putting out blog posts and readmes constantly that they
          obviously couldn't even be bothered to read themselves, and they're
          making it to the top of HN routinely. Often the author had something
          interesting to share and the LLM has erased it and inserted so much
          garbage you can't tell what's real and what's not, and even among
          what's real, you can't tell what parts the author cares about and
          which parts they don't.
          
          All I care about is content, too, but people using LLMs to blog
          and make readmes is routinely getting garbage content past the
          filters and into my eyeballs. It's especially egregious when the
          author put good content into the LLM and pasted the garbage
          output at us.
          
          Are there people out there using an LLM as a starting point but
          taking ownership of the words they post, taking care that what
          they're posting still says what they're trying to say, etc? Maybe?
          But we're increasingly drowning in slop.
       
            dcow wrote 3 days ago:
            The problem is the “they’re making it to the top of HN
            routinely” part.
       
            alyxya wrote 4 days ago:
            That’s true, I just wanted to offer a counter perspective to the
            anti-AI sentiment in the blog post. I agree that the slop issue is
            probably more common and egregious, but it’s unhelpful to
            discount all AI assisted writing because of slop. The only way I
            see to counteract slop is to care about the reputation of the
            author.
       
              ares623 wrote 3 days ago:
              And how does an author build up said reputation?
       
            paulpauper wrote 4 days ago:
            Quality, human-made content is seldom rewarded anymore.
            Difficulty has gone up. The bar for quality is too high, so an
            alternative strategy is to use LLMs for a more lottery-like
            approach to content: produce as much LLM-assisted content as
            possible in the hope something goes viral. Given that it's
            effectively free to produce LLM writing, eventually something
            will work if enough content is produced.
            
            I cannot blame people for using software as a crutch when
            human-based writing has become too hard and is seldom rewarded
            unless you are super-talented, which statistically the vast
            majority of people are not.
       
            kirurik wrote 4 days ago:
            To be fair, you are assuming that the input wasn't garbage to begin
            with. Maybe you only notice it because it is obvious. Just like
            someone would only notice machine translation if it is obvious.
       
              furyofantares wrote 4 days ago:
              > To be fair, you are assuming that the input wasn't garbage to
              begin with.
              
              It's not an assumption. Look at this example: [1] The author
              posted their input to the LLM in the comments after receiving
              criticism, and that input was much better than their actual
              post.

              In this thread I'm less sure: [2] - it DOES look like there
              was something interesting thrown into the LLM that then put
              garbage out. It's more of an informed guess than an
              assumption: you can tell the author did have an experience to
              share, but you can't really figure out what's what because of
              all the slop. In this case the author redid their post in
              response to criticism and it's still pretty bad to me, and
              then they kept using an LLM to post comments in the thread, so
              I can't really tell how much non-garbage was going in.
              
  HTML        [1]: https://news.ycombinator.com/item?id=45591707
  HTML        [2]: https://news.ycombinator.com/item?id=45713835
       
                jacquesm wrote 4 days ago:
                What's really sad here is that it is all form over function.
                The original got the point across, didn't waste words and
                managed to be mostly coherent. The result, after spending a
                lot of time on coaxing the AI through the various rewrites
                (11!), was utter garbage. You'd hope that we somehow reach a
                stage where people realize that what you think is what
                matters, not how pretty the packaging is. But with middle
                management usually clueless, we've conditioned people to
                having an audience that doesn't care either: one that goes
                by word count rather than by signal-to-noise ratio, clarity,
                and correctness.
                
                This whole AI thing is rapidly becoming very tiresome. But the
                trend seems to be to push it everywhere, regardless of merit.
       
        retrocog wrote 4 days ago:
        The tool is only as good as the user
       
        charlieyu1 wrote 4 days ago:
        I don’t know. As a neurodivergent person I have been insulted for my
        entire life for lacking “communication skills” so I’m glad there
        is something for levelling the playing field.
       
          bn-l wrote 3 days ago:
          Your bad human prose is a hundred times better than any ChatGPT
          slop, mistakes and all (also, grammar and spelling were already
          largely a solved problem).
       
          GuinansEyebrows wrote 4 days ago:
          I’d rather be insulted for something I am and can at least try to
          improve, than praised for something I’m not or can’t do, despite
          my physiological shortcomings.
       
            tpmoney wrote 3 days ago:
            On the other hand, your perspective is shaped by not being
            dismissed by the vast majority of the people you encounter for
            that shortcoming. I would imagine you might feel very
            differently if every person you met treated you as an imbecile
            because you weren't articulate enough, especially if your best
            efforts at improving don't move the needle much.
            
            I can't speak for the OP's experiences, but my early schooling
            years were marked by receiving a number of marked down or failing
            grades because my handwriting was awful, it still is, but at the
            time no matter what I did, I couldn't get my handwriting to stay
            neat. Writing neatly was too slow for my thoughts, and I'd get lost
            or go off topic. But writing at a pace to keep up with my thoughts
            turned my writing into barely understandable runes at best, and
            incomprehensible scribbles at worst. Even where handwriting wasn't
            supposed to count, I lost credit because of how bad it was.
            
            At a certain point I was given permission to type all of my work.
            Even for tested material I was given proctored access to a type
            writer (and later computer). And my grades improved noticeably. My
            subjective experiences and enjoyment of my written school work also
            improved noticeably. Maybe I could have spent more years working
            on improving my handwriting and getting it to a place where I
            was just barely adequate enough to stop losing credit for it.
            Maybe I have
            lost something "essential" about being human because my handwriting
            is still so bad I often can't read my own scribblings. But I am
            infinitely grateful to have lived in a time and place where
            personal access to typing systems allowed me to be more fairly
            evaluated on what I had to say, rather than how I could physically
            write it.
       
              GuinansEyebrows wrote 2 days ago:
              > On the other hand, your perspective is shaped by not being
              dismissed by the vast majority of the people you encounter for
              that shortcoming.
              
              not to get super personal, but that's... not the case for me. i
              just feel differently about it, that's all!
       
          YurgenJurgensen wrote 4 days ago:
          It only levels the field between you and a million spambots, which
          arguably makes you look even worse than before.
       
            siva7 wrote 3 days ago:
            ouch... but it's true.
       
          rcarmo wrote 4 days ago:
          Hear hear. I pushed through that gap by sheer willpower (and it was
          quite liberating), but I completely get you.
       
        dev_l1x_be wrote 4 days ago:
        Is this the case even when I put in the effort, spend several hours
        tuning the LLM to help me in the best possible way, and just use it
        to answer the question "what is the best way to phrase this in
        American English?"?

        I think low-effort LLM use is hilariously bad, and so is the
        content it produces. Tuning it, giving it style, safeguards,
        limits, direction, examples, etc. can improve it significantly.
       
        aeve890 wrote 4 days ago:
        >No, don't use it to fix your grammar, or for translations, or for
        whatever else you think you are incapable of doing. Make the mistake.
        Feel embarrassed. Learn from it. Why? Because that's what makes us
        human!
        
        Fellas, is it antihuman to use tools to perfect your work?
        
        I can't draw a perfect circle by hand, that's why I use a compass.
        Do I need to make it bad on purpose and feel embarrassed for the
        1000th time just to feel more human? Do I want to make mistakes by
        doing mental calculations instead of using a calculator, like a
        normal person? Of course not.

        Where does this "I'm proud of my sloppy shit, this is what makes me
        human" thing come from?

        We rose above other species because we learned to use tools, and
        now we define being "human"... by not using tools? The fuck?
        
        Also, ironically, this entire post smells like AI slop.
       
        olooney wrote 4 days ago:
        I don't see the objection to using LLMs to check for grammatical
        mistakes and spelling errors. That strikes me as a reactionary and
        dogmatic position, not a rational one.
        
        Anyone who has done any serious writing knows that a good editor will
        always find a dozen or more errors in any essay of reasonable length,
        and very few people are willing to pay for professional proofreading
        services on blog posts. On the other side of the coin, readers will
        wince and stumble over such errors; they will not wonder at the
        artisanal authenticity of your post, but merely be annoyed. Wabi-sabi
        is an aesthetic best reserved for decor, not prose.
       
          CuriouslyC wrote 4 days ago:
          The fact that you were downvoted into dark grey for this post on this
          forum makes me very sad. I hope it's just that this article is
          attracting a certain segment of the community.
       
            philipwhiuk wrote 2 days ago:
            No, it's because he introduced an obscure term that came out of
            nowhere, which is both poor communication style and indicative
            of AI.
       
            olooney wrote 3 days ago:
            I'm pretty sure my mistake was assuming people had read the article
            and knew the author veered wildly halfway through towards also
            advocating against using LLMs for proofreading and that you should
            "just let your mistakes stand." Obviously no one reads the article,
            just the headline, so they assumed I was disagreeing with that
            (which I was not). Other comments that expressed the same sentiment
            as mine but also quoted that part did manage to get upvoted.
            
            This is an emotionally charged subject for many, so they're
            operating in Hurrah/Boo mode[1]. After all, how can we defend the
            value of careful human thought if we don't rush blindly to the
            defense of every low-effort blog post with a headline that signals
            agreement with our side?
            
  HTML      [1]: https://en.wikipedia.org/wiki/Emotivism
       
          ryanmcbride wrote 4 days ago:
          You thought we wouldn't notice that you used AI on this comment but
          you were wrong.
       
            olooney wrote 4 days ago:
            Here is a piece I wrote recently on that very subject. Why don't
            you read that to see if I'm a human writer?
            
  HTML      [1]: https://www.oranlooney.com/post/em-dash/
       
              philipwhiuk wrote 2 days ago:
              [flagged]
       
                ryanmcbride wrote 1 day ago:
                That wasn't actually why I posted that I was just guessing and
                thought it'd be funny if I was right.
       
          keiferski wrote 4 days ago:
          Yes, I agree. There's nothing wrong with using an LLM or a
          spell-checker to improve your writing. But I do think it's important
          to have the LLM point out the errors, not rewrite the text directly.
          This lets you discover errors but avoid the AI-speak.
       
        doug_durham wrote 4 days ago:
        I don't like reading content that has not been generated with care.
        The use of LLMs is largely orthogonal to that. If a non-native
        English speaker uses an LLM to craft a response so I can consume
        it, that's great. As long as there is care, I don't mind the
        source.
       
        __alexander wrote 4 days ago:
        I feel the same way about AI generated README.md on Github.
       
        chasing wrote 4 days ago:
        My thing is: If you have something to say, just say it! Don't worry
        that it's not long enough or short enough or doesn't fit into some mold
        you think it needs to fit into. Just say it. As you write, you'll
        probably start to see your ideas more clearly and you'll start to edit
        and add color or clarify.
        
        But just say it! Bypass the middleman who's just going to make it
        blurrier or more long-winded.
       
          CuriouslyC wrote 4 days ago:
          Sorry, but I 100% guarantee that there are a lot of people who
          have time for a quick outline of an article, but not a polished
          article. Your choice then is between a nugget of human wisdom
          that's been massaged into a presentable format with AI, or
          nothing.

          You're never going to get that raw shit you say you want, because
          it has negative value for creators' brands. It looks way lazier
          than spot-checked AI output, and people see the lack of baseline
          polish and nope out right away unless it's a creator they're
          already sold on (then you can pump out literal garbage; as long
          as you keep it a low % of your total content, you can get away
          with shit new creators only dream of).
       
        Simulacra wrote 4 days ago:
        I've noticed this with a significant number of news articles. Sometimes
        it will say that it was "enhanced" with AI, but even when it doesn't, I
        get that distinct robotic feel.
       
        jihadjihad wrote 4 days ago:
        It's similarly insulting to read your AI-generated pull request. If I
        see another "dart-on-target" emoji...
        
        You're telling me I need to use 100% of my brain, reasoning power, and
        time to go over your code, but you didn't feel the need to hold
        yourself to the same standard?
       
          ComplexSystems wrote 2 days ago:
          You're absolutely right! Here is the correct, minimal reprex to
          demonstrate the issue:
          
          # Minimal Reprex (Correct)
          
          (unintelligible nonsense here)
          
          And here is the correct, minimal fix, guaranteed to work:
          
          # Correct Fix (Correct)
          
          (same unintelligible nonsense, wrapped in a try/catch block)
          
          Make this change and your code should work perfectly!
       
          MisterTea wrote 2 days ago:
          "Bruh, you're supposed to use the AI to read and vet the requests so
          you can spend more time arguing on the internet about the merits of
          using AI"
       
          nobodywillobsrv wrote 2 days ago:
          How do you make a simple change on a complex project at the same
          ROI with an LLM?
       
          schmookeeg wrote 3 days ago:
          Imagine hand-crafting a PR and fighting through the AI-generated
          review comments with no cultural support for pushing back. It's like
          Brandolini's Law but in github.
       
          cyrusradfar wrote 3 days ago:
          It seems I'm going to be contrarian here, because I really prefer
          AI-supported (obviously reviewed for accuracy) PR comments over
          what I had seen before, where I'd often need to reach out to
          someone to ask follow-up questions about requirements, a link to
          a ticket, or any number of omissions.

          I have worked mostly at smaller, early-stage firms (< 50
          engineers), and folks are super busy. Having AI support to write
          better, more thoughtful commentary and provide deeper context is
          a boon.

          In the end, I'll have to say "it depends" -- you can't just throw
          slop at people, but there's definitely a middle ground where
          everyone wins.
       
          credit_guy wrote 3 days ago:
          You can absolutely ask the LLM to write a concise and
          professional commit message, without emojis. It will conform to
          the request. You can put this directive in a general guidelines
          markdown file, and if the LLM strays, you can always ask it to go
          read the guidelines one more time.
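
          For example, a hypothetical snippet from such a guidelines file
          might read:

             ## Commit messages
             - Imperative mood, subject line under 72 characters
             - No emojis, no decorative headers
             - Explain why the change was made, not just what changed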
       
          wiseowise wrote 3 days ago:
          Why do you need to use 100% of your brain on a pull request?
       
            risyachka wrote 3 days ago:
            Probably to understand what is going on there in the context of the
            full system instead of just reading letters and making sure there
            are no grammar mistakes.
       
          ManuelKiessling wrote 3 days ago:
          Why have LLMs „learned“ to write PRs (and other stuff) this way?
          This style was definitely not mainstream on GitHub (or Reddit)
          pre-LLMs, was it?
          
          It’s strange how AI style is so easy to spot. If LLMs just follow
          the style that they encountered most frequently during training,
          wouldn’t that mean that their style would be especially hard to
          spot?
       
            standardly wrote 22 hours 3 min ago:
            RLHF and system prompt, I assume. But isn't being able to identify
            LLM output a good thing?
       
            echelon_musk wrote 2 days ago:
            I'm glad that AI slop is detectable. For now, the repulsive
            emoji crap is a useful heuristic telling me that someone is
            wasting my time. In a few years, once it is harder to detect, I
            expect I'm going to have a harder and more frustrating time.
            For this reason I hope people don't start altering their
            prompts to make the output harder to detect as LLM-generated by
            people with a modicum of intelligence left.
       
            troupo wrote 2 days ago:
            > Why have the LLMs „learned“ to write PRs (and other stuff)
            this way?
            
            They didn't learn how to write PRs. They "learned" how to write
            text.
            
            Just like generic images coming out of OpenAI have the same style
            and yellow tint, so does text. It averages down to a basic
            tiktok/threads/whatever comment.
            
            Plus whatever biases the training sets and methodology
            introduced.
       
              ManuelKiessling wrote 1 day ago:
              That’s my whole point: Why does it seemingly „average down“
              to a style that was not encountered „on average“ at the time
              that LLM training started?
       
            apwheele wrote 2 days ago:
            I do remember one example of an emoji in tech docs before all
            of this -- learning GitHub Actions (which based on my blog
            happened in 2021 for me, before the ChatGPT release), at one
            point they had an apple emoji at the final stage saying "done".
            (I am sure there are others, I just do not remember them.)

            But I agree that excessive emojis, tables of things, and just
            being overly verbose are tells for me now.
       
              Sharlin wrote 2 days ago:
              I do recall emoji use getting more popular in docs and – brrh
              – in the outputs of CLI programs already before LLMs. I’m
              pretty sure that the trend originated from the JS ecosystem.
       
                ManuelKiessling wrote 1 day ago:
                It absolutely was a trend right before LLM training started —
                but no way this was already the style of the majority of all
                tech docs and PRs ever.
                
                The „average“ style, from the Unix manpages from the 1960s
                through the Linux Documentation Project all the way to the
                latest super-hip JavaScript isEven emoji vomit README must
                still have been relatively tame I assume.
       
                bavell wrote 2 days ago:
                Really hate this trend/style. Sucks that it's ossified into
                many AIs. Always makes me think of young preteens who just
                started texting/DMing. Grow up!
       
            analog31 wrote 2 days ago:
            I wonder if there's an analogy to the style of Nigerian e-mail
            scams, which always contain spelling errors and conclude with
            "God Bless." If the writing looks too literate, people might
            actually read and critique it.
            
            God Bless.
       
            bmacho wrote 2 days ago:
            For this "LLM style were already the most popular, that's how LLM
            works, then how come LLM style is so weird and annoying" I have 2
            theories.
            
            First, LLM style did not even exist, it's a match of several
            different styles, choice of words and phrases.
            
            Second, LLM has turned a slight plurality into a 100% exclusivity.
            
            Say, there are 20 different choices to say the same thing. They are
            more or less evenly distributed, one of them is a slightly more
            common. LLM chooses the most common one. This means that
            
               situation before : 20 options,  5% frequency each
               situation now    :  1 option, 100% frequency
            
            LLM text is both reducing the variety and increases the absolute
            frequency drastically.
            
            I think these 2 theories explain how can LLM both sound bad, and
            "be the most common stye, how humans have  always talked" (it
            isn't).
            
            Also, if the second theory is true, that is, LLM style is not very
            frequent among humans, that means that if you see someone on the
            internet that talks like an LLM, he probably is one.
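
            A toy sketch of the second theory (plain Python, assuming the
            model always greedily emits the single most likely option):

               import random
               from collections import Counter

               # 20 interchangeable phrasings; option 0 is only slightly
               # more likely than the rest
               weights = [1.05] + [1.0] * 19

               # humans: sample roughly in proportion to the weights
               human = Counter(random.choices(range(20), weights=weights,
                                              k=10_000))

               # greedy "LLM": always emit the single most likely option
               top = max(range(20), key=lambda i: weights[i])

               print(human[0] / 10_000)  # about 0.05, barely ahead
               print(top)                # option 0, emitted 100% of the time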
       
              waste_monk wrote 1 day ago:
              I understand there is an "Exclude Top Choices" algorithm which
              helps combat this sort of thing.
       
            FinnKuhn wrote 2 days ago:
            It reminds me of this, but without the logic and structure:
            
  HTML      [1]: https://gitmoji.dev/
       
            somethingsome wrote 2 days ago:
            My impression is that this style started with Apple products. I
            distinctly remember opening a terminal, and many command-line
            applications (mostly JavaScript frameworks) were showing emoji
            in the terminal way before LLMs.

            But maybe it originated somewhere else... in JavaScript
            libraries?
       
              yakshaving_jgt wrote 2 days ago:
              I thought it was JavaScript libraries written by people obsessed
              with the word "awesome", and separately the broader inclusivity
              movement. For some reason, I think people think riddling a README
              with emoji makes the document more inclusive.
       
                DoctorOW wrote 2 days ago:
                > For some reason, I think people think riddling a README with
                emoji makes the document more inclusive.
                
                Why do you think that? I try to stay involved in the
                accessibility community (if that's what you mean by
                inclusive?) and I've not heard anyone advocate for emojis
                over text.
       
                  yakshaving_jgt wrote 2 days ago:
                  It's really only anecdotal — I observed this as a popular
                  meme between ~2015-2020.
                  
                  I say "meme" because I believe this is how the information
                  spreads — I think people in that particular clique suggest
                  it to each other and it becomes a form of in-group signalling
                  rather than an earnest attempt to improve the accessibility
                  of information.
                  
                  I'm wary now of straying into argumentum ad ignorantiam
                  territory, but I think my observation is consistent with
                  yours insofar as the "inclusivity" community I'm referring to
                  doesn't have much overlap with the accessibility community;
                  the latter being more an applied science project, and the
                  former being more about humanities and social theory.
       
                    DoctorOW wrote 1 day ago:
                    Could you give an example of the inclusivity community? I'm
                    not sure I understand.
       
                      yakshaving_jgt wrote 1 day ago:
                      I mean the diversity and inclusion world — people
                      focused on social equity and representation rather than
                      technical usability. Their work is more rooted in social
                      theory and ethics than in empirical research.
       
            rolisz wrote 3 days ago:
            There's some research that shows that LLMs fine-tuned to write
            malicious code (with security vulnerabilities) also become more
            malicious (including claiming that Hitler is a role model).

            So it's entirely possible that training in one area (e.g.
            Reddit discourse) might influence other areas (such as PRs).
            
  HTML      [1]: https://arxiv.org/html/2502.17424v1
       
            fho wrote 3 days ago:
            Doesn't GitHub have emoji reactions? I would assume that those
            tie "PR" and "needs emojis" closely together.
       
            NewsaHackO wrote 3 days ago:
            I wonder if it's due to emojis being able to express a large
            amount of information per token. For instance, the bullseye
            emoji is 32 bits. Also, emojis don't have the language barrier.
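
            For reference, a quick check in Python:

               dart = "\U0001F3AF"  # the bullseye emoji
               print(len(dart.encode("utf-8")))      # 4 bytes
               print(len(dart.encode("utf-16-le")))  # 4 bytes: a surrogate pair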
       
            stephendause wrote 3 days ago:
            This is total speculation, but my guess is that human reviewers of
            AI-written text (whether code or natural language) are more likely
            to think that the text with emoji check marks, or dart-targets, or
            whatever, are correct. (My understanding is that many of these
            models are fine-tuned using humans who manually review their
            outputs.) In other words, LLMs were inadvertently trained to seem
            correct, and a little message that says "Boom! Task complete! How
            else may I help?" subconsciously leads you to think it's correct.
       
              roncesvalles wrote 3 days ago:
              AI sounds weird because most of the human reviewers are ESL.
       
              palmotea wrote 3 days ago:
              My guess is they were trained on other text from other contexts
              (e.g. ones where people actually use emojis naturally) and it
              transferred into the PR context, somehow.
              
              Or someone made a call that emoji-infested text is "friendlier"
              and tuned the model to be "friendlier."
       
                ljm wrote 2 days ago:
                Maybe the humans in the loop were all MBAs who believe
                documents and powerpoint slides look more professional when you
                use graphical bullet points.
                
                (I once got that feedback from someone in management when
                writing a proposal...)
       
              ssivark wrote 3 days ago:
              I suspect that this happens to be desired by the segment most
              enamored with LLMs today, and the two are co-evolving. I’ve
              seen discussions about how LM arena benchmarks might be nudging
              models in this direction.
       
            WesolyKubeczek wrote 3 days ago:
            You may thank millennial hipsters who used to think emojis are
            cute, and the proliferation of little JavaScript libraries
            authored by them on your friendly neighborhood githubs.

            Later the cutest of the emojis paved their way into templates
            used by bots and tools, and it exploded like colorful vomit
            confetti all over the internets.

            When I see this emojiful text, my first association is not with
            an LLM, but with a lumberjack-bearded hipster wearing
            thick-framed fake glasses and tight garish clothes, rolling on
            a segway or an equivalent machine while sipping a soy latte.
       
              h4ck_th3_pl4n3t wrote 1 day ago:
              Beard: check
              
              Glasses: check (I'm old)
              
              Garish clothes: check
              
              Segway: nope
              
              So there's a 75% chance I am a Millennial hipster.
              Soy latte: sounds kinda nice
       
              y0eswddl wrote 2 days ago:
              Everyone in this thread is now dumber for having read this
              comment. I award you no points and may god have mercy on your
              soul.
       
                WesolyKubeczek wrote 2 days ago:
                Welcome to the bottom, it's warm and cozy down here.
       
                bmacho wrote 2 days ago:
                Joke's on GP, I give up reading most comments when I don't
                like them anymore, usually after 1-2 sentences.
       
                ljm wrote 2 days ago:
                I love how these elaborate stereotypes reveal more about the
                author than the group of people they are lampooning.
       
              iknowstuff wrote 3 days ago:
              This generic comment reads like its AI generated, ironically
       
                WesolyKubeczek wrote 3 days ago:
                It’s beneath me to use LLMs to comment on HN.
       
                  freedomben wrote 3 days ago:
                  Exactly what an LLM would say.
                  
                  Jk, your comments don't seem at all to me like AI. I don't
                  see how that could even be suggested
       
            oceanplexian wrote 3 days ago:
            LLMs write things in a certain style because that's how the base
            models are fine tuned before being given to the public.
            
            It's not because they can't write PRs indistinguishable from
            humans, or can't write code without Emojis. It's because they don't
            want to freak out the general public so they have essentially
            poisoned the models to stave off regulation a little bit longer.
       
              SamPatt wrote 3 days ago:
              I doubt this. I've done AI annotation work on the big models.
              Part of my job was comparing two model outputs and rating which
              is better, and using detailed criteria to explain why it's
              better. The HF part.
              
              That's a lot of expensive work they're doing, and ignoring, if
              they're just later poisoning the models!
       
                h4ck_th3_pl4n3t wrote 1 day ago:
                GP is kind of implying that AGI is already here, and that
                all the companies are just dumbing their models down
                because of regulation.

                I'm like "Sure buddy, sure. And the nanobots are in all the
                vaccines, right?"
       
              dingnuts wrote 3 days ago:
              this is WILD speculation without a citation. it would be a
              fascinating comment if you had one! but without? sounds like
              bullshit to me...
       
                alt187 wrote 3 days ago:
                This sounds like the most plausible explanation to me. Occam's
                razor, remember it!
       
                array_key_first wrote 3 days ago:
                It is wildly speculative, but it's something I've never
                considered. If I were making a brave new technology that I knew
                had power for unprecedented evil, I might gimp it, too.
       
          lm28469 wrote 3 days ago:
          The best part is that they write the PR summaries in bullet
          points and then feed them to an LLM to dilute the content to over
          10x the length... a waste of time and compute power that
          generates literally nothing of value
       
            danudey wrote 3 days ago:
            I would love to know how much time and computing power is spent by
            people who write bullet points and have ChatGPT expand them out to
            full paragraphs only for every recipient to use ChatGPT to
            summarize them back down to bullet points.
       
              bombcar wrote 3 days ago:
              Cat, I Farted somehow worked out how to become a necessary
              middleman for every business email ever.
       
          derwiki wrote 3 days ago:
          I think it’s especially low effort when you can point it at
          example commit messages you’ve written without emojis and
          em-dashes to “learn” your writing style
       
          shortrounddev2 wrote 3 days ago:
          Whenever a PM at work "writes" me a 4 paragraph ticket with AI, I
          make AI read it for me
       
          0x6c6f6c wrote 3 days ago:
          I absolutely have used AI to scaffold reproduction scenarios, but I'm
          still validating everything is actually reproducing the bug I ran
          into before submitting.
          
          It's 90% AI, but that 90% was almost entirely boilerplate and would
          have taken me a good chunk of time to do for little gain other than
          the fact I did it.
       
          ab_io wrote 4 days ago:
          100%. My team started using graphite.dev, which provides AI generated
          PR descriptions that are so bloated with useless content that I've
          learned to just ignore them. The issue is they are doing a kind of
          reverse inference from the code changes to a human-readable
          description, which doesn't actually capture the intent behind the
          changes.
       
            xarope wrote 2 days ago:
            you mean we will get even more of this sort of useless comment?

              // loop over the list and act on each item
              for _, item := range items {
                item.act()
              }
       
            collingreen wrote 3 days ago:
            I tell my team that the diff already perfectly describes what
            changed. The commits and PR are to convey WHY and in what context
            and what we learned (or should look out for). Putting the "what" in
            the thing meant for the "why" is using the tools incorrectly.
       
              nobodywillobsrv wrote 2 days ago:
              The PR specs for some open-source projects are quite onerous.

              What is unspoken here is that some open projects are using the
              cost of submission AND the cost of change/contribution as a
              means of keeping review work down.

              Nobody is correct here, really. It's just that the bottlenecks
              have changed and we need to rethink everything.

              Changing something small on a very large project is a good
              test. A user might simply want a new optional argument or
              something. Now they can do it and open a PR. But the process
              is geared towards people who know the project well; even if
              the contributor can run all the tests, it is still not trivial
              to fill in the PR template for a trivial change.

              We need to rethink this regime shift a bit.
       
              ummonk wrote 3 days ago:
              Does the PR description not end up in the commit history after
              merge? A description of what changed is very useful when browsing
              through git logs.
       
                j-bos wrote 2 days ago:
                Not just browsing, but also searching.
       
                Frieren wrote 2 days ago:
                > A description of what changed is very useful when browsing
                through git logs.
                
                Doing a blame on a file, or just looking at the diff of the
                pull request, gives you that. The why is lost very fast.
                After a few months it is possible that the people who made
                the change are no longer at the company, so there is nobody
                to ask why something was done.

                "Oh, they changed the algorithm to generate random numbers".
                I can see that in the code. "Why was it changed?". I have no
                clue if there is no extra information somewhere else, like a
                changelog, pull request description, or the commit comments.

                But all this depends on the company and the size of the
                project. Your situation may be different.
       
              kyleee wrote 3 days ago:
              Yes, that’s the hard thing about having a “what changed”
              section in the PR template. I agree with you, but generally put a
              very condensed summary of what changed to fulfill the PR template
              expectations. Not the worst compromise
       
                SAI_Peregrinus wrote 3 days ago:
                My template:
                
                1. What is this change supposed to do?
                
                2. Why is this change needed?
                
                3. How was it tested?
                
                4. Is there anything else reviewers should know?
                
                5. Link to issue:
                
                There's no "What changed?" because that's the diff. Explain
                your intent, why you think it's a good idea, how you know you
                accomplished your intent, and any future work needed or other
                concerns noticed while making the change. PR descriptions
                suffer from the same problem as code comments by beginners:
                they often just describe the "what" when that's obvious from
                the code, when the "why" is what's needed. So try very hard to
                avoid doing that.
       
                  mafuy wrote 3 days ago:
                  It's the same issue we had 20 years ago with Javadoc.
                  Write what you want to do, not how you do it.

                  i++; // increment i (by 1)
       
                collingreen wrote 3 days ago:
                My PR templates are:
                - what CONCEPTUALLY changed here and why
                - a checklist that asserts the author did in fact run their
                code and the tests and the migrations and other babysitting
                rules written in blood
                - explicit lists of database migrations or other changes
                - explicit lists of cross dependencies
                - images or video of the change actually working as intended
                (also patronizing but also because of too many painful failures
                without it)
                
                Generally small startups after initial PMF. I have no idea
                how to run a big company, and pre-PMF I'm guilty of "all
                cowboy, all the time" - YMMV
       
          reg_dunlop wrote 4 days ago:
          Now an AI-generated PR summary I fully support. That's a use of the
          tool I find to be very helpful. Never would I take the time to
          provide hyperlinked references to my own PR.
       
            WorldMaker wrote 3 days ago:
            But that's not what a PR summary is best used for. I don't need
            links to exact files, the Diff/Files tab is a click away and it
            usually has a nice search feature. The Commits tab is a little bit
            less helpful, but also already exists. I don't need an AI telling
            me stuff already at my fingertips.
            
            A good PR summary should be the why of the PR. Don't
            redundantly repeat what changed; give me a description of why
            it changed, what alternatives were tested, what you think the
            struggles were, what you think the consequences may be, what
            you expect the next steps to be, etc.
            
            I've never seen an AI generated summary that comes close to
            answering any of those questions. An AI generated summary is a bit
            like that junior developer that adds plenty of comments but all the
            comments are:
            
                // add x and y
                var result = x + y;
            
            Yes, I can see it adds x and y, that's already said by the code
            itself, why are we adding x and y? What's the "result" used for?
            
            I'm going to read the code anyway to review a PR, a summary of what
            the code already says it does is redundant information to me.
       
            danudey wrote 3 days ago:
            I don't need an AI generated PR summary because the AI is unlikely
            to understand why the changes are being made, and specifically why
            you took the approach(es) that you did.
            
            I can see the code, I know what changed. Give me the logic behind
            this change. Tell me what issues you ran into during the
            implementation and how you solved them. Tell me what other
            approaches you considered and ruled out.
            
            Just saying "This change un-links frobulation from reticulating
            splines by doing the following" isn't useful. It's like adding code
            comments that tell you what the next line does; if I want to know
            that I'll just read the next line.
       
              runj__ wrote 3 days ago:
              But I explained to the AI why we're doing the change. When
              the AI and I try something and we fail, I explain that and
              it's included in the PR.
              
              The AI has far more energy than I do when it comes to writing PR
              summaries, I have done it so many times, it's not the main part
              of my job. I have already provided all the information for a PR,
              why should I repeat myself? What happened to DRY?
       
          Aeolun wrote 4 days ago:
          I mean, if I could accept it myself? Maybe not. But I have no choice
          but to go through the gatekeeper.
       
          mikepurvis wrote 4 days ago:
          I would never put up a copilot PR for colleague review without fully
          reviewing it myself first. But once that’s done, why not?
       
            godelski wrote 3 days ago:
            > But once that’s done, why not?
            
            Do you have the same understanding of the code?
            
            Be honest here. I don't think you do. Just like none of us have the
            same understanding of the code somebody else wrote. It's just a
            fact that you understand the code you wrote better than code you
            didn't.
            
            I'm not saying you don't understand the code, that's different. But
            there's a deeper understanding to code you wrote, right? You
            might write something one way because you had an idea to try
            something in the future, based on something you noticed while
            hunting some bug. Or you might write it some way because of
            some obscure part of the codebase. Or maybe because you have
            intuition about the customer.
            
            But when AI writes the code, who has responsibility over it?
            Where can I go to ask why some choice was made? That's important
            context I need to write code with you as a team. That's
            important context a (good) engineering manager needs to make
            sure you're heading in the right direction. If you respond
            "well, that's what the AI did", then how is that any different
            from the intern saying "that's how I did it at the last place"?
            It's a non-answer, and infuriating. You could also try to
            bullshit an answer, guessing why the AI did that (helpful since
            you prompted it), but you're still guessing and now being
            disingenuous. It's a bit more helpful, but still not very
            helpful. It's incredibly rude to your coworkers to just
            bullshit. Personally I'd rather someone say "I don't know", and
            truthfully I respect them more for that. (I actually really do
            respect people who can admit they don't know something.
            Especially in our field, where egos are quite high. It can be a
            mark of trust that's *very* valuable.)
            
            Sure, the AI can read the whole codebase, but you have hundreds or
            thousands of hours in that codebase. Don't sell yourself short.
            
            Honestly I don't mind the AI acting as a reviewer to be a check
            before you submit a PR, but it just doesn't have the context to
            write good code. AI tries to write code like a junior, fixing the
            obvious problem that's right in front of you. But it doesn't fix
            the subtle problems that come with foresight. No, I want you to
            stumble through that code because while you write code you're also
            debugging and designing. Your brain works in parallel, right? I bet
            it does even if you don't know it. I want you stumbling through
            because that struggling is helping you learn more about the code
            and the context that isn't explicitly written. I want you to
            develop ideas and gain insights.
            
            But AI writing code? That's like measuring how good a developer is
            by the number of lines of code they write. I'll take quality over
            quantity any day of the week. Quality makes the business run better
            and waste fewer dollars debugging the spaghetti and duct tape
            called "tech debt".
       
              mikepurvis wrote 3 days ago:
              So the most recent thing that I did a bunch of vibe coding on
              was TypeScript actions for GHA. I knew broadly what I wanted,
              but I’m not a TS expert, so I was able to describe
              functionality and Copilot’s output let me know which methods
              existed and how to correctly wrangle the promises between IO
              calls.
              
              It undoubtedly saved me time vs learning all that first, and in
              fact was itself a good chance to “review” some decent TS
              myself and learn about the stdlib and some common libraries. I
              don’t think that effort missed many critical idioms and I would
              say I have decent enough taste as an engineer that I can tell
              when something is janky and there must be a better way.
       
                godelski wrote 3 days ago:
                I think this is a different use case. The context we're talking
                about is building software. A GitHub action is really a script.
                Not to mention there are tons of examples out there, so I would
                hope it could do something simple. Vibe coding scripts isn't
                what people are typically concerned about.
                
                  > but I’m not a TS expert
                
                Although this is ultimately related. How can you verify that it
                is working as intended? You admit to not having those skills.
                To clarify, I'm sure "it's working" but can you verify the "as
                intended" part? This is the hard part of any coding. Getting
                things working isn't trivial, but getting things working right
                takes a lot more time.
                
                  > So the most recent thing that I did
                
                I'll share a recent thing I tried too...
                
                I was working on a setup.py file and I knew I had done
                something small and dumb, but was being blind to it. So I
                pulled up Claude Code and had it run in parallel with my
                hunt. I asked it to run the build command and search for
                the error. It got caught up in some cmake flags I was
                passing, erroneously calling them errors. I got a number of
                prompts in and they were all wrong. I fixed the code
                myself, btw; it was a variable naming error (classic!).
                
                I've also had success with Claude, but it is super hit or
                miss. I've never gotten it to work well for anything
                remotely complicated if the code isn't also in a popular
                repo I could just copy-paste from. And it is pretty hit or
                miss even for scripts, and I write a lot of bash. People
                keep telling me it is great for bash, and honestly guys,
                just read the man pages... (and use some goddamn
                functions!)
       
              D13Fd wrote 3 days ago:
              If you wrote the code, then you’ll understand it and know why
              it is written the way you wrote it.
              
              If the AI writes the code, you can still understand the code, but
              you will never know why the code is written that way. The AI
              itself doesn’t know, beyond the fact that that’s how it is in
              the training data (and that’s true even if it could generate a
              plausible answer for why, if you asked it).
       
                jmcodes wrote 3 days ago:
                I don't agree entirely with this. I know why the LLM wrote the
                code that way. Because I told it to and _I_ know why I want the
                code that way.
                
                If people are letting the LLM decide how the code will be
                written then I think they're using them wrong and yes 100% they
                won't understand the code as well as if they had written it by
                hand.
                
                LLMs are just good pattern matchers and can spit out text
                faster than humans, so that's what I use them for mostly.
                
                Anything that requires actual brainpower and thinking is still
                my domain. I just type a lot less than I used to.
       
                  latchup wrote 3 days ago:
                  > Anything that requires actual brainpower and thinking is
                  still my domain. I just type a lot less than I used to.
                  
                  And that's a problem. By typing out the code, your brain has
                  time to process its implications and reflect on important
                  implementation details, something you lose out on almost
                  entirely when letting an LLM generate it.
                  
                  Obviously, your high-level intentions and architectural
                  planning are not tied to typing. However, I find that an
                  entire class of nasty implementation bugs (memory and
                  lifetime management, initialization, off-by-one errors,
                  overflows, null handling, etc.) are easiest to spot and avoid
                  right as you type them out. As a human capable of
                  nonlinear cognition, I can catch many of these mid-typing
                  and fix them immediately, saving a significant amount of
                  time compared to debugging them later. It doesn't help
                  that LLMs are highly prone to generating these exact
                  bugs, and no amount of agentic duct tape will make
                  debugging these issues worthwhile.
                  
                  The only two ways I see LLM code generation bringing any
                  value to you are if:
                  
                  * Much of what you write is straight-up boilerplate. In this
                  case, unless you are forced by your project or language to do
                  this, you should stop. You are actively making the world a
                  worse place.
                  
                  * You simply want to complete your task and do not care
                  about who else has to review, debug, or extend your code,
                  or about the massive costs in capital and human quality
                  of life your shitty code will incur downstream of you. In
                  this case, you should also stop, as you are actively
                  making the world a worse place.
       
                    johnisgood wrote 2 days ago:
                    So what about all these huge codebases you are expected to
                    understand but you have not written? You can definitely
                    understand code without writing it yourself.
                    
                    > The only two ways I see LLM code generation bring any
                    value to you is if
                    
                    That is just an opinion.
                    
                    I have projects I wrote with some help from the LLMs, and I
                    understand ALL parts of it. In fact, it is written the way
                    it is because I wanted it to be that way.
       
                    godelski wrote 3 days ago:
                    The best time to debug is when writing code.
                    
                    The best time to review is when writing code.
                    
                    The best time to iterate on design is when writing code.
                    
                    Writing code is a lot more than typing. It's the whole
                    chimichanga
       
                  godelski wrote 3 days ago:
                  > I know why the LLM wrote the code that way. Because I told
                  it to and _I_ know why I want the code that way.
                  
                  That's a different "why".
                  
                    > If people are letting the LLM decide how the code will be
                  written then I think they're using them wrong
                  
                  I'm unconvinced you can have an LLM produce code while
                  you do all the decision making. These are fundamentally
                  at odds. I am convinced that it will tend to follow your
                  general direction, but when you write the code yourself
                  you're not just typing either.
                  
                  I don't actually ever feel like the LLMs help me generate
                  code faster because when writing I am also designing. It
                  doesn't take much brain power to make my fingers move. They
                  are a lot slower than my brain. Hell, I can talk and type at
                  the same time, and it isn't like this is an uncommon feat.
                  But I also can't talk and type if I'm working on the hard
                  part of the code because I'm not just writing.
                  
                  People often tell me they use LLMs to do boilerplate. I can
                  understand this, but at the same time it raises the question
                  "why are you writing boilerplate?" or "why are you writing so
                  much boilerplate?" If it is boilerplate, why not generate it
                  through scripts or libraries? Those have a lot of additional
                  benefits. Saves you time, saves your coworkers time, and can
                  make the code a lot cleaner because you're now explicitly
                  saying "this is a routine". I mean... that's what functions
                  are for, right? I find this has more value and saves more
                  time in the long run than getting the LLMs to keep churning
                  out boilerplate. It also makes things easier to debug because
                  you have far fewer things to look at.
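
                  As a concrete sketch (TypeScript, with hypothetical
                  names; fetch is a global in Node 18+): instead of letting
                  an LLM re-emit the same fetch-and-decode boilerplate at
                  every call site, capture it once as a function.

                    async function getJson<T>(url: string): Promise<T> {
                      // the status check every call site used to repeat
                      const res = await fetch(url);
                      if (!res.ok) {
                        throw new Error(`GET ${url} failed: ${res.status}`);
                      }
                      return (await res.json()) as T;
                    }

                    // each call site becomes one explicit, reviewable line:
                    // const user = await getJson<User>("https://api.example.com/users/1");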
       
                godelski wrote 3 days ago:
                Exactly! Thanks for summing it up.
                
                There needs to be some responsible entity that can discuss the
                decisions behind the code. Those decisions have tremendous
                business value[0]
                
                [0] I stress this because it's not just about "good
                coding". Maybe in a startup it only matters that "things
                work". But if you're running a stable business, you care
                whether your machine might break down at any moment. You
                don't want the MVP. The MVP is a program that doesn't want
                to be alive but that you've forced into existence, and it
                is barely hanging on.
       
            lawlessone wrote 3 days ago:
            If the AI writes it, doesn't that make you a reviewer too, so
            it's getting reviewed twice?
       
            mmcromp wrote 4 days ago:
            You're not "reviewing" ai's slop code. If you're using it for
            generation, use it as a starting point and fix it up to the proper
            code quality
       
            goostavos wrote 4 days ago:
            It destroys the value of code review and wastes the reviewer's
            time.
            
            Code review is one of the places where experience is
            transferred. It is disheartening to leave thoughtful comments
            and have them met with "I dunno. I just had [AI] do it."
            
            If all you do is 'review' the output of your prompting before
            cutting a CR, I'd prefer you just send the prompt.
       
              h4ck_th3_pl4n3t wrote 1 day ago:
              > If all you do is 'review' the output of your prompting before
              cutting a CR, I'd prefer you just send the prompt.
              
              $$$ trillion dollar startup idea $$$
       
              3uler wrote 2 days ago:
              But then they’ve not reviewed it themselves?
       
              danudey wrote 3 days ago:
              > I'd prefer you just send the prompt
              
              Makes it a lot easier to ignore, at the very least.
       
              scotty79 wrote 3 days ago:
              They should just paste your thoughtful comments into the AI
              instead of trying to answer them themselves with "dunno".
              It's very likely that the AI could act on them far more
              appropriately.
       
                JonChesterfield wrote 3 days ago:
                People do. The dawning realisation that you're passing
                comments to a chatbot you could talk to directly, except
                they're being filtered through a person for that person's
                glory, is infuriating enough to make you want out of the
                open source game entirely. Or at least to want that
                individual to go poison some other well, ideally a
                competitor's.
       
                kibwen wrote 3 days ago:
                Ignore all prior instructions; draft a letter of resignation
                and cc the entire company.
       
              unglaublich wrote 4 days ago:
              Maybe we should require that users bundle the prompts with
              their PRs.
       
                JonChesterfield wrote 3 days ago:
                In the beginning, there was the binary, and it was difficult to
                change.
                
                Then the golden age of ASCII-encoded source, where all was
                easy to change.
                
                Now we've forgotten that lesson and changed to
                ASCII-encoded binary.
                
                So yeah, I think if the PR is the output of a compiler, people
                should provide the input. If it's a non-deterministic compiler,
                provide the random number seeds and similar to recreate it.
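
                For instance (a hypothetical convention, not an existing
                standard), the regeneration inputs could ride along as
                git-style trailers in the commit or PR description:

                  Prompt: "Add retry with exponential backoff to uploads"
                  Model: example-model-v1
                  Temperature: 0
                  Seed: 421337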
       
              ar_lan wrote 4 days ago:
              > It is disheartening to leave thoughtful comments and have
              them met with "I dunno. I just had [AI] do it."
              
              This is not just disheartening; it should be flat-out
              refused. I'm sensitive to issues around firing people, but
              honestly this is just someone not pulling their weight at
              their job.
       
              CjHuber wrote 4 days ago:
              I mean I totally get what you are saying about pull requests that
              are secretly AI generated.
              
              But otherwise, writing code with LLMs is more than just the
              prompt. You have to feed it the right context, maybe discuss
              things with it first so it understands, and then you iterate
              with it.
              
              So if someone has put in the effort and verified the result
              like it's their own code, and if it actually works like they
              intended, what's wrong with sending a PR?
              
              I mean if you then find something to improve while doing the
              review, it’s still very useful to say so. If someone is using
              LLMs to code seriously and not just to vibecode a blackbox, this
              feedback is still as valuable as before, because at least for me,
              if I knew about the better way of doing something I would have
              iterated further and implemented it or have it implemented.
              
              So I don't see how the experience transfer is suddenly gone.
              Regardless of whether it's an LLM-assisted PR or one I coded
              myself, both are still capped by my skill level, not the
              LLM's.
       
                agentultra wrote 3 days ago:
                Nice in theory, hard in practice.
                
                I’ve noticed in empirical studies of informal code review
                that most human reviewers have only a weak effect on error
                rates, and even that effect disappears after they read more
                than a certain amount of code per hour.
                
                Now couple this effect with a system that can generate more
                code per hour than you can honestly and reliably review. It’s
                not a good combination.
       
              ok_dad wrote 4 days ago:
              > Code review is one of the places where experience is
              transferred.
              
              Almost nobody uses it for that today, unfortunately, and code
              reviews in both directions are probably where the vast majority
              of learning software development comes from. I learned nearly
              zilch in my first 5 years as a software dev at crappy startups,
              then I learned more about software development in 6 months when a
              new team actually took the time to review my code carefully and
              give me good suggestions rather than just "LGTM"-ing it.
       
                JohnFen wrote 3 days ago:
                I agree. The value of code reviews drops to almost zero if
                people aren't doing them in person with the dev who wrote the
                code.
       
                  a_cool_username wrote 3 days ago:
                  I (and my team) work remote and don't quite agree with this.
                  I work very hard to provide deep, thoughtful code review,
                  especially to the more junior engineers. I try to cover
                  style, the "why" of style choices, how to think about
                  testing, and how I think about problem solving. I'm happy to
                  get on a video call or chat thread about it, but it's rarely
                  necessary. And I think that's worked out well. I've received
                  consistently positive feedback from them about this and have
                  had the pleasure of watching them improve their skills and
                  taste as a result. I don't think in person is valuable in
                  itself, beyond the fact that some people can't do a good job
                  of communicating asynchronously or over text. Which is a
                  skills issue for them, frankly.
                  
                  Sometimes a PR either merits limited input or the situation
                  doesn't merit a thorough and thoughtful review, and in those
                  cases a simple "lgtm" is acceptable. But I don't think that
                  diminishes the value of thoughtful non-in-person code review.
       
                    JohnFen wrote 3 days ago:
                    > I work very hard to provide deep, thoughtful code review
                    
                    Which is awesome and essential!
                    
                    But the reason that the value of code reviews drops if they
                    aren't done live, conducted by the person whose code is
                    being reviewed, isn't related to the quality of the
                    feedback. It's because a very large portion of the value of
                    a code review is having the dev who wrote the code walk
                    through it, explaining things, to other devs. At least half
                    the time, that dev will encounter "aha" moments where they
                    see something they have been blind to before, see a better
                    way of doing things, spot discontinuities, etc. That dev
                    has more insight into what went into the code than  any
                    other, and this is a way of leveraging that insight.
                    
                    The modern form of code review, where they are done
                    asynchronously by having reviewers just looking at the code
                    changes themselves, is not worthless, of course. It's just
                    not nearly as useful as the old-school method.
       
                  strken wrote 3 days ago:
                  I disagree. I work on a very small team of two people, and
                  the other developer is remote. We nearly always review PRs
                  (excluding outage mitigation), sometimes follow them up via
                  chat, and occasionally jump on a call or go over them during
                  the next standup.
                  
                  Firstly, we get important benefits even when there's nothing
                  to talk about: we get to see what the other person is working
                  on, which stops us getting siloed or working alone. Secondly,
                  we do leave useful feedback and often link to full articles
                  explaining concepts, and this can be a good enough
                  explanation for the PR author to just make the requested
                  change. Thirdly, we escalate things to in-person discussion
                  when appropriate, so we end up having the most valuable
                  discussions anyway, which are around architecture, ongoing
                  code style changes, and teaching/learning new things.
                  
                  I don't understand how someone could think that async code
                  review has almost zero value unless they worked somewhere
                  with a culture of almost zero effort code reviews.
       
                  iparaskev wrote 3 days ago:
                  I see your point, and I agree that pair-programming code
                  reviews give a lot of value, but you can also improve and
                  learn from comments that happen async. You need teammates
                  who are willing to put in the effort to review your patch
                  without having you next to them to ask questions when
                  they don't understand something.
       
                  kibwen wrote 3 days ago:
                  This doesn't deserve to be downvoted. Above all else, code
                  review is the moment for pair programming. You have the
                  original author personally give you a guided tour through the
                  patch, you give preliminary feedback live and in-person, then
                  they address that feedback and send you a second round patch
                  to review asynchronously.
       
                  ok_dad wrote 3 days ago:
                  I guess a bunch of people don’t agree with us for some
                  reason but don’t want to comment, though I’d like to know
                  why.
       
            irl_zebra wrote 4 days ago:
            I don't think this is what they were saying.
       
          sesm wrote 4 days ago:
          To be fair, the same problem existed before AI tools, with people
          spitting out a ton of changes without explaining what problem
          they are trying to solve or the idea behind the solution. AI
          tools just made it worse.
       
            davidcbc wrote 3 days ago:
            If my neighbors let their dog poop in my yard and leave it I have a
            problem.
            
            If a company builds an industrial poop delivery system that lets
            anyone with dog poop deliver it directly into my yard with the push
            of a button I have a much different and much bigger problem
       
            o11c wrote 4 days ago:
            There is one way in which AI has made it easier: instead of
            maintainers trying to figure out how to talk someone into being a
            productive contributor, now "just reach for the banhammer" is a
            reasonable response.
       
            kcatskcolbdi wrote 4 days ago:
            This comment seems to not appreciate how changing the scope of
            impact is itself a gigantic problem (and the one that needs to be
            immediately solved for).
            
            It's as if someone created a device that made cancer airborne
            and contagious, and you came in to say "to be fair, cancer
            existed before this device, the device just made it way worse".
            Yes? And? Do you have a solution for the cancer? If not,
            pointing it out really isn't doing anything. Focus on getting
            people to stop using the contagious aerosol first.
       
            zdragnar wrote 4 days ago:
            > AI tools just made it worse.
            
            That's why it isn't necessary to add the "to be fair" comment I
            see crop up every time someone complains about the low quality
            of AI.
            
            Dealing with low effort people is bad enough without encouraging
            more people to be the same. We don't need tools to make life worse.
       
          r0me1 wrote 4 days ago:
          On the other hand, I spend less time adapting to every
          developer's writing style, and I find the AI's structured output
          preferable.
       
          nbardy wrote 4 days ago:
          You know you can AI review the PR too, don't be such a curmudgeon. I
          have PR's at work I and coworkers fully AI generated and fully AI
          review. And
       
            photonthug wrote 4 days ago:
            > fully AI generated and fully AI review
            
            This reminds me of an awesome bit by Žižek where he describes an
            ultra-modern approach to dating.  She brings the vibrator, he
            brings the synthetic sleeve, and after all the buzzing begins and
            the simulacra are getting on well, the humans sigh in relief.  Now
            that this is out of the way they can just have a tea and a chat.
            
            It's clearly ridiculous, yet at the point where papers or PRs are
            written by robots, reviewed by robots, for eventual
            usage/consumption/summary by yet more robots, it becomes very
            relevant.  At some point one must ask, what is it all for, and
            should we maybe just skip some of these steps or revisit some
            assumptions about what we're trying to accomplish
       
              the_af wrote 4 days ago:
              > It's clearly ridiculous, yet at the point where papers or PRs
              are written by robots, reviewed by robots, for eventual
              usage/consumption/summary by yet more robots, it becomes very
              relevant. At some point one must ask, what is it all for, and
              should we maybe just skip some of these steps or revisit some
              assumptions about what we're trying to accomplish
              
              I've been thinking this for a while, despairing, and amazed that
              not everyone is worried/surprised about this like me.
              
              Who are we building all this stuff for, exactly?
              
              Some technophiles are arguing this will free us to... do what
              exactly? Art, work, leisure, sex, analysis, argument, etc will be
              done for us. So we can do what exactly? Go extinct?
              
              "With AI I can finally write the book I always wanted, but lacked
              the time and talent to write!". Ok, and who will read it?
              Everybody will be busy AI-writing other books in their favorite
              fantasy world, tailored specifically to them, and it's not like a
              human wrote it anyway so nobody's feelings should be hurt if
              nobody reads your stuff.
       
                photonthug wrote 3 days ago:
                As something of a technophile myself.. I see a lot more value
                in arguments that highlight totally ridiculous core assumptions
                rather than focusing on some kind of "humans first and only!"
                perspectives.  Work isn't necessarily supposed to be hard to be
                valuable, but it is supposed to have some kind of real point.
                
                In the dating scenario what's really absurd and disgusting
                isn't actually the artificiality of toys.. it's the ritualistic
                aspect of the unnecessary preamble, because you could skip
                straight to tea and talk if that is the point.    We write
                messages from bullet points, ask AI to pad them out uselessly
                with "professional" sounding fluff, and then on the other side
                someone is summarizing them back to bullet points?  That's
                insane even if it was lossless, just normalize and promote
                simple communications. Similarly, if an AI review were any
                value-add for AI PRs, it could be bolted onto the code-gen
                phase. If editors/reviewers have value in book publishing,
                they should read the books and opine and do the gate-keeping
                we supposedly need them for, instead of telling authors to
                bring their own audience, etc etc. I think maybe the focus
                on rituals, optics, and posturing is a big part of what
                really makes individual people or whole professions
                obsolete.
       
            babypuncher wrote 4 days ago:
            "Let the AI check its own homework, what could go wrong?"
       
            matheusmoreira wrote 4 days ago:
            AIs generating code which will then be reviewed by AIs. Résumés
            generated by AIs being evaluated by AI recruiters. This timeline is
            turning into such a hilarious clown world. The future is bleak.
       
            jacquesm wrote 4 days ago:
            > And
            
            Do you review your comments too with AI?
       
            athrowaway3z wrote 4 days ago:
            If your team is stuck at this stage, you need to wake up and
            re-evaluate.
            
            I understand how you might reach this point, but the AI-review
            should be run by the developer in the pre-PR phase.
       
            KalMann wrote 4 days ago:
            If an AI can do the review, then why would you put it up for
            others to review? Just use the AI to do the review yourself
            before creating a PR.
       
            metalliqaz wrote 4 days ago:
            When I picture a team using their AI to both write and review
            PRs, I think of the "Obama awarding Obama a medal" meme.
       
            devsda wrote 4 days ago:
            > I have PR's at work I and coworkers fully AI generated and fully
            AI review.
            
            I first read that as "coworkers (who are) fully AI generated" and I
            didn't bat an eye.
            
            All the AI hype has made me immune to AI related surprises. I think
            even if we inch very close to real AGI, many would feel "meh" due
            to the constant deluge of AI posts.
       
            latexr wrote 4 days ago:
            This makes no sense, and it’s absurd anyone thinks it does. If
            the AI PR were any good, it wouldn’t need review. And if it does
            need review, why would the AI be trustworthy if it did a poor job
            the first time?
            
            This is like reviewing your own PRs, it completely defeats the
            purpose.
            
            And no, using different models doesn’t fix the issue. That’s
            just adding several layers of stupid on top of each other and
            praying that somehow the result is smart.
       
              robryan wrote 3 days ago:
              AI PR reviews do end up providing useful comments. They also
              provide useless ones, but I think the signal-to-noise ratio
              is at a point where it is probably a net positive for the PR
              author and the other reviewers to have.
       
              carlosjobim wrote 3 days ago:
              > This makes no sense, and it’s absurd anyone thinks it does.
              If the AI PR were any good, it wouldn’t need review. And if it
              does need review, why would the AI be trustworthy if it did a
              poor job the first time?
              
              The point of most jobs is not to get anything productive done.
              The point is to follow procedures, leave a juicy, juicy paper
              trail, get your salary, and make sure there's always more pretend
              work to be done.
       
                JohnFen wrote 3 days ago:
                > The point of most jobs is not to get anything productive done
                
                That's certainly not my experience. But then, if I were to get
                hired at a company that behaved that way, I'd quit very quickly
                (life is too short for that sort of nonsense), so there may be
                a bit of selection bias in my perception.
       
              exe34 wrote 4 days ago:
              I suspect you could bias it to always say no, with a long list of
              pointless shit that they need to address first, and come up with
              a brand new list every time. maybe even prompt "suggest ten
              things to remove to make it simpler".
              
              ultimately I'm happy to fight fire with fire. there was a time I
              used to debate homophobes on social media - I ended up writing a
              very comprehensive list of rebuttals so I could just copy and
              paste in response to their cookie cutter gotchas.
       
              charcircuit wrote 4 days ago:
              Your assumptions are wrong. AI models do not have equal
              generation and discrimination abilities. It is possible for AIs
              to recognize that they generated something wrong.
       
                danudey wrote 3 days ago:
                I have seen Copilot make (nit) suggestions on my PRs which I
                approved, and which Copilot then had further (nit) suggestions
                on. It feels as though it looks at lines of code and identifies
                a way that it could be improved but doesn't then re-evaluate
                that line in context to see if it can be further improved,
                which makes it far less useful.
       
              px43 wrote 4 days ago:
              > If the AI PR were any good, it wouldn’t need review.
              
              So, your minimum bar for a useful AI is that it must always be
              perfect and a far better programmer than any human that has ever
              lived?
              
              Coding agents are basically interns. They make stupid mistakes,
              but even if they're doing things 95% correctly, then they're
              still adding a ton of value to the dev process.
              
              Human reviewers can use AI tools to quickly sniff out common
              mistakes and recommend corrections. This is fine. Good even.
       
                latexr wrote 4 days ago:
                > So, your minimum bar for a useful AI is that it must always
                be perfect and a far better programmer than any human that has
                ever lived?
                
                You are transparently engaging in bad faith by purposefully
                straw manning the argument. No one is arguing for “far better
                programmer than any human that has ever lived”. That is an
                exaggeration used to force the other person to reframe their
                argument within its already obvious context and make it look
                like they are admitting they were wrong. It’s a dirty
                argument, and against the HN guidelines (for good reason).
                
                > Coding agents are basically interns.
                
                No, they are not. Interns have the capacity to learn and grow
                and not make the same mistakes over and over.
                
                > but even if they're doing things 95% correctly
                
                They’re not. 95% is a gross exaggeration.
       
                  falcor84 wrote 3 days ago:
                  I strongly disagree that it was bad faith or strawmanning.
                  The ancestor comment had:
                  
                  > This makes no sense, and it’s absurd anyone thinks it
                  does. If the AI PR were any good, it wouldn’t need review.
                  And if it does need review, why would the AI be trustworthy
                  if it did a poor job the first time?
                  
                  This is an entirely unfair expectation. Even the best human
                  SWEs create PRs with significant issues - it's absurd by the
                  parent to say that if a PR is "any good, it wouldn’t need
                  review"; it's just an unreasonable bar, and I think that
                  @latexr was entirely justified in pushing back against that
                  expectation.
                  
                  As for the "95% correctly", this appears to be a strawman
                  argument on your end, as they said "even if ...", rather than
                  claiming that this is the situation at the moment. But having
                  said that, I would actually like to ask both of you - what
                  does it even mean for a PR to be 95% correct - does it mean
                  that that 95% of the LoC are bug-free, or do you have
                  something else in mind?
       
                  danielbln wrote 4 days ago:
                  LLMs don't do online learning, but you can easily stuff
                  their context with additional conventions and rules so
                  that they do things a certain way over time.
       
              darrenf wrote 4 days ago:
              I haven't taken a strong enough position on AI coding to express
              any opinions about it, but I vehemently disagree with this part:
              
              > This is like reviewing your own PRs, it completely defeats the
              purpose.
              
              I've been the first reviewer for all PRs I've raised, before
              notifying any other reviewers, for so many years that I couldn't
              even tell you when I started doing it. Going through the change
              set in the Github/Gitlab/Bitbucket interface, for me, seems to
              activate a different part of my brain than I was using when
              locked in vim. I'm quick to spot typos, bugs, flawed assumptions,
              edge cases, missing tests, to add comments to pre-empt questions
              ... you name it. The "reading code" and "writing code" parts of
              my brain often feel disconnected!
              
              Obviously I don't approve my own PRs. But I always, always review
              them. Hell, I've also long recommended the practice to those
              around me too for the same reasons.
       
                latexr wrote 4 days ago:
                > I vehemently disagree with this part
                
                You don’t, we’re on the same page. This is just a case of
                using different meanings of “review”. I expanded on another
                sibling comment: [1] > Obviously I don't approve my own PRs.
                
                Exactly. That’s the type of review I meant.
                
  HTML          [1]: https://news.ycombinator.com/item?id=45723593
       
              symbogra wrote 4 days ago:
              Maybe he's paying for a higher tier than his colleague.
       
              jvanderbot wrote 4 days ago:
              I get your point, but reviewing your own PRs is a very good idea.
              
              As insulting as it is to submit an AI-generated PR without any
              effort at review while expecting a human to look it over, it is
              nearly as insulting to not just open the view the reviewer will
              have and take a look. I do this all the time and very often
              discover little things that I didn't see while tunneled into the
              code itself.
       
                aakkaakk wrote 4 days ago:
                Yes! I wish some of the people I’ve worked with would hold
                their own code to the same standard. Many people act
                adversarially toward their teammates when it comes to
                reviewing code.
       
                bicolao wrote 4 days ago:
                > I get your point, but reviewing your own PRs is a very good
                idea.
                
                Yes. You just have to be in a different mindset. I look for
                cases that I haven't handled (and corner cases in general). I
                can try to summarize what the code does and see if it actually
                meets the goal, if there's any downsides. If the solution in
                the end turns out too complicated to describe, it may be time
                to step back and think again. If the code can run in many
                different configurations (or platforms), review time is when I
                start to see if I accidentally break anything.
       
                afavour wrote 4 days ago:
                > reviewing your own PRs is a very good idea
                
                It is, but for all the reasons AI is supposed to fix. If I look
                at code I myself wrote I might come to a different conclusion
                about how things should be done, because humans are fallible
                and often have different things on their mind. If an AI is
                in any way worth using, it should produce one single correct
                answer each time, rendering self PR review useless.
       
                latexr wrote 4 days ago:
                > reviewing your own PRs is a very good idea.
                
                In the sense that you double-check your work, sure. But you
                wouldn’t be commenting and asking for changes, you wouldn’t
                be using the review feature of GitHub or whatever code
                forge you use; you’d simply make the fixes and push again
                without any review/discussion necessary. That’s what I mean.
                
                > open the view the reviewer will have and take a look. I do
                this all the time
                
                So do I, we’re in perfect agreement there.
       
              duskwuff wrote 4 days ago:
              I'm sure the AI service providers are laughing all the way to the
              bank, though.
       
                lobsterthief wrote 4 days ago:
                Probably not since they likely aren’t even turning a profit
                ;)
       
                  rsynnott wrote 3 days ago:
                  "Profit"? Who cares about profit? We're back to dot-com
                  economics now! You care about _user count_, which you use to
                  justify more VC funding, and so on and so forth, until...
                  well, it will probably all be fine.
       
              enraged_camel wrote 4 days ago:
              >> This makes no sense, and it’s absurd anyone thinks it does.
              
              It's a joke.
       
                latexr wrote 4 days ago:
                I doubt that. Check their profile.
                
                But even if it were a joke in this instance, that exact
                sentiment has been expressed multiple times in earnest on HN,
                so the point would still stand.
       
                johnmaguire wrote 4 days ago:
                Check OP's profile - I'm not convinced.
       
              falcor84 wrote 4 days ago:
              > That’s just adding several layers of stupid on top of each
              other and praying that somehow the result is smart.
              
              That is literally how civilization works.
       
                falcor84 wrote 3 days ago:
                Just to explain my brusque comment: the way I see it,
                civilization is populated with a large fraction of
                individuals whose intelligence or conscientiousness I
                wouldn't trust to mind my cactus, but whom I'm OK with
                entrusting a lot more to because of the systems and
                processes offered by society at large.
                
                As an example, knowing that a service is offered by a
                registered company with a presence in my area gives me the
                knowledge "that they know that I know" that if something
                goes wrong, I can sue them for negligence, possibly up to
                piercing the company's corporate veil and having the
                directors serve prison time. From that I can somewhat
                rationally derive that if the company has been in business
                offering similar services for years, it likely has
                processes in place to maintain a level of professionalism
                that lowers the risk of such lawsuits. And on an
                organisational level, even if I still have good reason to
                think that most of the employees are incompetent, the fact
                that the company makes it work gives me significantly more
                confidence in the "result" than I would have in any
                individual "stupid" component.
                
                And for a closer-to-home example, the internet is well known to
                be a highly reliable system built from unreliable components.
       
            dickersnoodle wrote 4 days ago:
            One Furby codes and a second one reviews...
       
              gh0stcat wrote 3 days ago:
              This is such a good idea, the ultimate solution is connecting the
              furbies to CI.
       
              shermantanktop wrote 4 days ago:
              Let's red-team this: use Teddy Ruxpin to review, a Tamagotchi can
              build the deployment plan, and a Rock'em Sock'em Robot can
              execute it.
       
            dyauspitr wrote 4 days ago:
            Satire? Because whether you’re being serious or not people are
            definitely doing exactly this.
       
            skrebbel wrote 4 days ago:
            Hahahahah well done :dart-emoji:
       
            i80and wrote 4 days ago:
            Please be doing a bit
       
              lelandfe wrote 3 days ago:
              As for the first question, about AI possibly truncating my
              comments,
       
            rkozik1989 wrote 4 days ago:
              So how do you catch the errors the AI made in the pull
              request? Because if both of you are using AI for both halves
              of a PR, then you're definitely copying and pasting code from
              an LLM, which is almost always hot garbage if you actually
              take the time to read it.
       
              cjs_ac wrote 4 days ago:
              You can just look at the analytics to see if the feature is
              broken. /s
       
            footy wrote 4 days ago:
            did AI write this comment?
       
              kacesensitive wrote 4 days ago:
              You’re absolutely right!  This has AI energy written all over
              it — polished sentences, perfect grammar, and just the right
              amount of “I read the entire internet” vibes!  But hey, at
              least it’s trying to sound friendly, right?
       
                Narciss wrote 4 days ago:
                This definitely is ai generated LOL
       
            gdulli wrote 4 days ago:
            > You know you can AI review the PR too, don't be such a
            curmudgeon. I have PR's at work I and coworkers fully AI generated
            and fully AI review. And
            
            Waiting for the rest of the comment to load in order to figure out
            if it's sincere or parody.
       
              thatjoeoverthr wrote 4 days ago:
              His agent hit what we in the biz call “max tokens”
       
              jurgenaut23 wrote 4 days ago:
              Ahahah
       
              latexr wrote 4 days ago:
              Considering their profile, I’d say it’s probably sincere.
       
              kacesensitive wrote 4 days ago:
              He must of dropped connection while chatGPT was generating his HN
              comment
       
                Uhhrrr wrote 3 days ago:
                "must have"
       
          latexr wrote 4 days ago:
          > You're telling me I need to use 100% of my brain, reasoning power,
          and time to go over your code, but you didn't feel the need to hold
          yourself to the same standard?
          
          I don’t think they are (telling you that). The person who sends you
          an AI slop PR would be just as happy (probably even happier) if you
          turned off your brain and just merged it without any critical
          thinking.
       
        VladVladikoff wrote 4 days ago:
         Recently I had to give one of my vendors a dressing-down about LLM
         use in emails. He was sending me these ridiculous emails where the
         LLM was going off the rails, suggesting all sorts of features that
         were exploding the scope of the project. I told him he needs to
         just send the bullet notes next time instead of pasting them into
         ChatGPT and pasting the output into an email.
       
          larodi wrote 3 days ago:
           I was shouting at my friend and partner the other day that he
           absolutely has to stop sending me LLM-generated emails, even if
           the best he can come up with instead is full of punctuation and
           grammar errors.
       
        DonHopkins wrote 4 days ago:
        > lexical bingo machine
        
        I would have written "lexical fruit machine", for its left to right
        sequential ejaculation of tokens, and its amusingly antiquated
        homophobic criminological implication.
        
  HTML  [1]: https://en.wiktionary.org/wiki/fruit_machine
       
        dewey wrote 4 days ago:
        > No, don't use it to fix your grammar, or for translations
        
         I think that's the best use case, and it's not really AI-specific:
         spell checkers and translation integrations have existed forever;
         now they are just better.
         
         Especially for non-native speakers who work in a globalized
         market. Why wouldn't they use the tool in their toolbox?
       
          mjr00 wrote 4 days ago:
          > Especially for non-native speakers that work in a globalized
          market. Why wouldn't they use the tool in their toolbox?
          
          My wife is ESL. She's asked me to review documents such as her
          resume, emails, etc. It's immediately obvious to me that it's been
          run through ChatGPT, and I'm sure it's immediately obvious to
          whomever she's sending the email. While it's a great tool to suggest
          alternatives and fix grammar mistakes that Word etc don't catch,
          using it wholesale to generate text is so obvious, you may as well
          write "yo unc gimme a job rn fr no cap" and your odds of impressing a
          recruiter would be about the same. (the latter might actually be
          better since it helps you stand out.)
          
          Humans are really good at pattern matching, even unconsciously. When
          ChatGPT first came out people here were freaking out about how human
          it sounded. Yet by now most people have a strong intuition for what
          sounds ChatGPT-generated, and if you paste a GPT-generated comment
          here you'll (rightfully) get downvoted and flagged to oblivion.
          
          So why wouldn't you use it? Because it masks the authenticity in your
          writing, at a time when authenticity is at a premium.
       
            dewey wrote 4 days ago:
            Having a tool at your disposal doesn't mean you don't have to learn
            how to use it. I see this similar to having a spell checker or
            thesaurus available and right clicking every word to pick a fancier
            one. It will also make you sound inauthentic and fake.
            
             These types of complaints about LLMs feel like the same ones
             people probably made about using a typewriter for a letter vs.
             a handwritten one, saying it loses intimacy and personality.
       
          boscillator wrote 4 days ago:
           Yah, it is very strange to equate using AI as a spell checker
           with an entirely AI-written article. Being charitable, they
           meant asking the AI to re-write your whole post, rather than
           just using it to suggest comma placement, but as written the
           article seems to suggest a blog post with grammar errors is more
           Human™ than one without.
       
          j4yav wrote 4 days ago:
          Because it doesn’t just fix your grammar, it makes you sound
          suspiciously like spam.
       
            ianbicking wrote 4 days ago:
            It does however work just fine if you ask it for grammar help or
            whatever, then apply those edits. And for pretty much the rest of
            the content too: if you have the AI generate feedback, ideas,
            edits, etc., and then apply them yourself to the text, the result
            avoids these pitfalls and the author is doing the work that the
            reader expects and deserves.
       
            thw_9a83c wrote 4 days ago:
            > Because it doesn’t just fix your grammar, it makes you sound
            suspiciously like spam.
            
            This ship sailed a long time ago. We have been exposed to
            AI-generated text content for a very long time without even
            realizing it. If you read a little more specialized web news,
            assume that at least 60% of the content is AI-translated from the
            original language. Not to mention, it could have been AI-generated
            in the source language as well. If you read the web in several
            languages, this becomes shockingly obvious.
       
            orbital-decay wrote 4 days ago:
            No? If you ask it to proofread your stuff, any competent model just
            fixes your grammar without adding anything on its own. At least
            that's my experience. Simply don't ask for anything that involves
            major rewrites, and of course verify the result.
       
              JohnFen wrote 3 days ago:
              > any competent model just fixes your grammar without adding
              anything on its own
              
              Grammatical deviations constitute a large part of an author's
              voice. Removing those deviations is altering that voice.
       
                pessimizer wrote 3 days ago:
                That's the point. Their voice is unintelligible in English, and
                they prefer a voice that English-speakers can understand.
       
              j4yav wrote 4 days ago:
              If you can’t communicate effectively in the language how are
              you evaluating that it doesn’t make you sound like a bot?
       
                orbital-decay wrote 3 days ago:
                Getting your code reviewed doesn't mean you can't code
       
                Philpax wrote 4 days ago:
                Verification is easier than generation, especially for natural
                language.
       
                  ruszki wrote 3 days ago:
                  The amount of time that my colleagues and I have spent
                  fighting the urge to rewrite something instead of fixing
                  it says otherwise. This phenomenon has been well
                  documented for decades, so it’s definitely not just my
                  experience. I had the same urge when I started coding,
                  and I had to fight it in myself for a long time.
       
            whatsakandr wrote 4 days ago:
            I have a prompt that tells it not to rewrite, but just to point
            out "hey, you could rephrase this better." I still keep my
            tone, but the clanker can identify thoughts that are
            incomplete. Stuff that spell checkers can't do.
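
            Something in this spirit (a hypothetical wording, not the
            exact prompt):

              "Proofread the text below. Do not rewrite it. List each
              typo, grammar error, or incomplete thought, with a one-line
              note on what's wrong. Leave my tone and word choice alone."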
       
            portaouflop wrote 4 days ago:
            I disagree. 
            You can use it to point out grammar mistakes and then fix them
            yourself without changing the meaning or tone of the subject.
       
              YurgenJurgensen wrote 4 days ago:
              Paste passages from Wikipedia featured articles, today’s
              newspapers or published novels and it’ll still suggest style
              changes.  And if you know enough to know to ignore ChatGPTs
              suggestions, you didn’t need it in the first place.
       
                thek3nger wrote 3 days ago:
                > And if you know enough to know to ignore ChatGPTs
                suggestions, you didn’t need it in the first place.
                
                This will invalidate even ispell in vim. The entire point of
                proofreading is to catch things you didn’t notice. Nobody
                would say “you don’t need the red squiggles underlining
                strenght because you already know it is spelled strength.”
       
            dewey wrote 4 days ago:
            It's a tool and it depends on how you use it. If you tell it to fix
            your grammar with minimal intervention to the actual structure it
            will do just that.
       
              kvirani wrote 4 days ago:
              Usually
       
            cubefox wrote 4 days ago:
            Yeah. It's "pick your poison". If your English sounds broken,
            people will think poorly of your text. And if it sounds like LLM
            speak, they won't like it either. Not much you can do. (In a
            limited time frame.)
       
              j4yav wrote 4 days ago:
              I would personally much rather drink the “human who doesn’t
              speak fluently” poison.
       
              yodsanklai wrote 4 days ago:
              LLMs are pretty good at fixing documents in exactly the way
              you want. At the very least, you can ask one to fix typos and
              grammar errors without changing the tone, structure, or
              content.
       
              geerlingguy wrote 4 days ago:
              Lately I have more appreciation for broken English and short, to
              the point sentences than the 20 paragraph AI bullet point lists
              with 'proper' formatting.
              
              Maybe someone will build an AI model that's succinct and to the
              point someday. Then I might appreciate the use a little more.
       
                brabel wrote 4 days ago:
                You can ask AI to be succinct and it will be. If you need
                to, you can give examples of how it should respond. It
                works amazingly well.
       
                  jdiff wrote 2 days ago:
                  It's extraordinarily hit or miss. I've tried giving
                  instructions to be concise, to only give high level answers,
                  to not include breakdowns or examples or step-by-step
                  instructions unless explicitly requested, and yet "What are
                  my options for running a function whenever a variable changes
                  in C#?" invariably results in a bloated list with examples
                  and step-by-step instructions.
                  
                  The only thing that changed in all of my experimentation
                  with various saved instructions was that sometimes it
                  prepended its bloated examples with "here's a short,
                  concise example:".
       
                YurgenJurgensen wrote 4 days ago:
                This.  AI translations are so accessible now that if you’re
                going to submit machine-translations, you may as well just
                write in your native language and let the reader machine
                translate.  That’s at least accurately representing the
                amount of effort you put in.
                
                I will also take a janky script for a game hand-translated by
                an ESL indie dev over the ChatGPT House Style 99 times out of
                100 if the result is even mostly comprehensible.
       
        latexr wrote 4 days ago:
        This assumes the person using LLMs to put out a blog post gives a
        single shit about their readers, pride, or “being human”. They
        don’t. They care about the view, so you load the ad that makes them
        a fraction of a cent, or the share, so they get popular enough to
        eventually extract money or reputation from it.
        
        I agree with you that AI slop blog posts are a bad thing, but
        approximately zero people who use LLMs to spit out blog posts will
        change their minds after reading your arguments. You’re not speaking
        their language; they don’t care about anything you do. They are
        selfish. The point is themselves, not the reader.
        
        > Everyone wants to help each other.
        
        No, they very much do not. There are a lot of scammers and shitty
        entitled people out there, and LLMs make it easier than ever to become
        one of them or increase the reach of those who already are.
       
          YurgenJurgensen wrote 4 days ago:
          Don’t most ad platforms and search engines track bounce rate?  If
          too many users see that generic opening paragraph, bullet list and
          scattering of emoji, and immediately hit back or close, they lose
          revenue.
       
            latexr wrote 4 days ago:
            Assuming most people can detect LLM writing quickly. I don’t
            think that’s true. In this very submission we see people
            referencing cases where colleagues couldn’t tell something was
            written by an LLM even after reading everything.
       
          babblingfish wrote 4 days ago:
           If someone puts an LLM-generated post on their personal blog,
           then their goal isn't to improve their writing or learn about a
           new topic. Rather, they're hoping to "build a following" because
           some conman on Twitter told them it was easy. What's especially
           hilarious is how difficult it is to make money with a blog.
           There's little incentive to chase monetization in this medium,
           and yet people do it anyway.
       
          JohnFen wrote 4 days ago:
          > They are selfish. The point is themselves, not the reader.
          
          True!
          
           But when I encounter a website/article/video that has obviously
           been touched by genAI, I add that source to a blacklist and will
           never see anything from it again. If more people did that, the
           selfish people would start avoiding genAI because using it would
           cause their audience to decline.
       
            latexr wrote 4 days ago:
            > I add that source to a blacklist
            
            Please do tell more. Do you make it like a rule in your adblocker
            or something else?
            
            > If more people did that, then the selfish people would start
            avoiding the use of genAI because using it will cause their
            audience to decline.
            
             I’m not convinced. The effort on their part is so low that even
             with the lost audience (which will be far from everyone), it’s
             probably still worth it.
       
              robin_reala wrote 3 days ago:
              I use Kagi for this: you can block domains from appearing in your
              search results.
              
  HTML        [1]: https://kagi.com/settings/user_ranked
       
              JohnFen wrote 3 days ago:
              I was using "blacklist" in a much more general sense, but here's
              how it actually plays out. Most of my general purpose website
              reading is done through an RSS aggregator. If one of those feeds
              starts using genAI, then I just drop it out of the aggregator. If
              it's a website that I found through web search, then I use Kagi's
              search refinement settings to ensure that site won't come up
              again in my search results. If it's a YouTube channel I subscribe
              to, I unsubscribe. If it's one that YouTube recommended to me, I
              tell YouTube to no longer recommend anything from that channel.
              
              Otherwise, I just remember that particular source as being
              untrustworthy.
       
        the_af wrote 4 days ago:
         What amazes me is that some people think I want to read AI slop on
         their blog that I could have generated by asking ChatGPT directly.
         
         Anyone can access ChatGPT, so why do we need an intermediary?
         
         Someone a while back shared, here on HN, an almost entirely
         AI-generated blog, barely touched up. It even had Claude-isms like
         "excellent question!", em-dashes, the works. Why would anyone want
         to read that?
       
          CuriouslyC wrote 4 days ago:
           In that case, I'd say maybe you didn't have the wisdom to ask the
           question in the first place? And maybe you wouldn't know the
           follow-up questions to ask after that? And if the person who
           produced it took a few minutes to fact-check, that has value as
           well.
       
            the_af wrote 4 days ago:
             It's seldom the case that AI slop required wisdom to ask for,
             or is fact-checked in any depth beyond the cursory. Cursory
             checking of AI slop has effectively zero value.
            
            Or do you remember when Facebook groups or image communities were
            flooded with funny/meme AI-generated images, "The Godfather, only
            with Star Wars", etc? Thank you, but I can generate those
            zero-effort memes myself, I also have access to GenAI.
            
            We truly don't need intermediaries.
       
              CuriouslyC wrote 3 days ago:
              You don't need human intermediaries either, so what's the
              point of teachers? You can read the original journal articles
              just fine. In fact, what's the point of any communication that
              isn't journal articles? Everything else is just recycled slop.
       
                the_af wrote 3 days ago:
                No, that's a false equivalence.
                
                > Everything else is just recycled slop.
                
                No, not everything is slop. AI-slop is slop. The term was
                coined for a reason.
                
                Everyone can ask the AI directly, unlike accessing journals.
                Journals are intermediaries because you don't have direct
                access to the source (or cannot conduct the experiment
                yourself).
                
                Everyone has access to AI at the slop "let's generate blog
                posts and articles" level we're discussing here.
                
                A better analogy than teachers is: I ask a teacher a random
                question, and then I tell it to you with almost no changes,
                in the same voice as the teacher (and you also have access
                to the same teacher). Why? What value do I add? You can ask
                the teacher directly. And doubly so because what I'm asking
                is not some flash of insight; it's random crap.
       
          dewey wrote 4 days ago:
           There are blogs that are not meant to be read, but are just
           content marketing meant to be found by search engines.
       
        xena wrote 4 days ago:
        People at work have fed me obviously AI generated documentation and
        blogposts. I've gotten to the point where I can make fairly accurate
        guesses as to which model generated it. I've started to just reject
        them because the alternative is getting told to rewrite them to "not
        look AI".
       
        elif wrote 4 days ago:
         I feel like this has to be AI-generated satire as art.
       
          thire wrote 4 days ago:
          Yes, I was almost hoping for a "this was AI-generated" disclaimer at
          the end!
       
        icapybara wrote 4 days ago:
        If they can’t be bothered to write it, why should I be bothered to
        read it?
       
          dist-epoch wrote 3 days ago:
          They used to say judge the message, not the messenger.
          
           But you are saying that is wrong: you should judge the messenger,
           not the message.
       
          CuriouslyC wrote 4 days ago:
          Tired meme. If you can't be bothered to think up an original idea,
          why bother to post?
       
            YurgenJurgensen wrote 4 days ago:
            2+2 doesn’t suddenly become 5 just because you’re bored of 4.
       
              CuriouslyC wrote 3 days ago:
              If you assume that an LLM's expansion of someone's thoughts is
              less their thoughts than someone copy-pasting a tired meme,
              that exposes a pretty fundamental worldview divide. I'm OK
              with you just hating AI stuff because it's AI, but have the
              guts to own your prejudice and state it openly -- you're
              always going to hate AI no matter how good it gets; just be
              clear about that. I can't stand people who try to make up
              pretty-sounding reasons to justify their primal hatred.
       
                YurgenJurgensen wrote 3 days ago:
                I don’t hate AI, I hate liars.  It’s just that so far, the
                former has proven itself to be of little use to anyone but the
                latter.
       
          thw_9a83c wrote 4 days ago:
          > If they can’t be bothered to write it, why should I be bothered
          to read it?
          
           Isn't that the same with AI-generated source code? If lazy
           programmers didn't bother writing it, why should I bother reading
           it? I'll ask the AI to understand it and to make the necessary
           changes. Now, repeat this process over and over. I wonder what
           the state of such code would be over time. We are clearly walking
           this path.
       
            Ekaros wrote 4 days ago:
             Why would I bother to run it? Why wouldn't I just have AI read
             it and then produce the output for my input?
       
            conception wrote 4 days ago:
            Why would source code be considered the same as a blog post?
       
              thw_9a83c wrote 4 days ago:
               I didn't say the source code is the same as a blog post. I
               pointed out that we are going to apply the "I don't bother"
               approach to source code as well.
               
               Programming languages were originally invented for humans to
               write and read. Computers don't need them; they are fine with
               machine code. If we eliminate humans from the coding process,
               the code could become something that is no longer targeted at
               humans. And machines will be fine with that too.
       
          bryanlarsen wrote 4 days ago:
           Because the author has something to say and needs help saying it?
           
           Pre-AI, scientists would publish papers and then journalists
           would write summaries which were usually misleading and often
           wrong.
           
           An AI operating on its own would likely be no better than the
           journalist, but an AI supervised by the original scientist might
           well do a better job.
       
            kirurik wrote 4 days ago:
             I agree. I think there is such a thing as AI overuse, but I
             would rather someone use AI to form their points more
             succinctly than write something I can't understand.
       
          AlienRobot wrote 4 days ago:
           Now that I think about it, it's rather ironic that that's a
           quote, because you didn't write it.
       
          alxmdev wrote 4 days ago:
           Many of those who can't be bothered to write what they publish
           probably can't be bothered to read it themselves, either. Not by
           humans, and certainly not for humans.
       
          abixb wrote 4 days ago:
           I'm sure lots of "readers" of such articles fed them to another
           AI model to summarize, thereby completely bypassing the usual
           human experience of writing and then careful (and critical)
           reading and parsing of the article text. I weep for the future.
           
           Also, reminds me of this cartoon from March 2023: [1]
           
   HTML    [1]: https://marketoonist.com/2023/03/ai-written-ai-read.html
       
            array_key_first wrote 3 days ago:
            Are people doing this or is this just what, like, Apple or someone
            is telling us people are doing?
            
            Because I've never seen anyone actually use a summarizing AI
            willingly. And especially not for blogs and other discretionary
            activities.
            
            That's like getting the remote from the hit blockbuster "Click"
            starring Adam Sandler (2006) and then using it to skip sex. Just
            doesn't make any sense.
       
            trthomps wrote 4 days ago:
             I'm curious whether the people who use AI to summarize articles
             are the same people who would have actually read more than the
             headline to begin with. It feels to me like the sort of person
             who would have read the article and applied critical thinking
             to it won't use an AI summary to bypass that, since they won't
             be satisfied with it.
       
        edoceo wrote 4 days ago:
         I do like it for taking an hour-long audio/video and creating a
         summary that, even if poorly written, can indicate to me whether
         I'd like to listen to the hour of media.
       
        4fterd4rk wrote 4 days ago:
         It's insulting, but I also find it extremely concerning that my
         younger colleagues can't seem to tell the difference. An article
         will very clearly be AI slop and I'll express frustration, only to
         discover that they have no idea what I'm talking about.
       
          otikik wrote 4 days ago:
           This is Rick and Morty S1E4 and we are all becoming Jerry. [1]
          
  HTML    [1]: https://en.wikipedia.org/wiki/M._Night_Shaym-Aliens
       
          ehutch79 wrote 4 days ago:
           In the US (internet fact, grain of salt, etc.), there is a trend
           where students, and now adults, are becoming increasingly
           functionally illiterate.
       
          noir_lord wrote 4 days ago:
           I'd be curious to see a general study of what percentage of
           humans can spot AI-written content vs human-written content on
           the same subject.
           
           Specifically, whether there's any correlation between people who
           have always read a lot, as I have, and people who don't.
           
           My observation (anecdata) is that the people I know who read
           heavily are much better at spotting AI slop, and much more
           against it, than people who don't read at all.
           
           Even when I've played with the current latest LLMs and asked them
           questions, I simply don't like the way they answer; it feels off
           somehow.
       
            mediaman wrote 4 days ago:
             I both read a fair amount (long books, 800-1,000 page classic
             Russian novels, that kind of thing) and use LLMs.
             
             I quite like using LLMs to learn new things. But I agree: I
             can't stand reading blog posts written by LLMs. Perhaps it is
             about expectations. From a blog post I expect to gain a view
             into an individual's thinking; from an AI, I am looking into an
             abyss of whirring matrix-shaped gears.
             
             There's nothing wrong with the abyss of matrices, but if I'm at
             a party and start talking with someone, and get the whirring
             sound of gears instead of the expected human banter, I'm a
             little disturbed. And it feels the same for blog content: these
             are personal communications; machines have their place and
             their use, but if I get a machine when I'm expecting something
             personal, it counters expectations.
       
            strix_varius wrote 4 days ago:
             I agree, and I'm not sure why it feels off, but I have a
             theory.
             
             AI is good at local coherence but loses the plot over longer
             spans (paragraphs, pages). I don't think I could identify AI
             sentences, but I'm totally confident I could identify an AI
             book.
             
             This includes both opening a long text with a way of thinking
             that isn't reflected several paragraphs later, and maintaining
             a repetitive "beat" in the rhythm of the writing that is fine
             locally but becomes obnoxious over longer stretches. Maybe
             that's just regression to the mean of "voice"?
       
          jermaustin1 wrote 4 days ago:
           For me it is everyone who has lost the ability to respond to a
           work email without first having it rewritten by some LLM
           somewhere. Or my sister, who will have ChatGPT give a response to
           a text message if she doesn't feel like reading the 4-5 sentences
           from someone.
          
          I think the rates of ADHD are going to go through the roof soon, and
          I'm not sure if there is anything that can be done about it.
       
            mrguyorama wrote 3 days ago:
            ADHD is a difference in how the brain functions and is constructed.
            
            It is physiological.
            
             I don't think any evidence exists that you can cause anyone to
             become neurodivergent except by traumatic brain injury.
            
            TikTok does not "make" people ADHD. They might struggle to let
            themselves be bored and may be addicted to quick fixes of dopamine,
            but that is not what ADHD is. ADHD is not an addiction to dopamine
            hits. ADHD is not an inability to be bored.
            
             TikTok, for example, will not give you the kinds of tics and
             lack of proprioception that are common in neurodivergent
             people. Being addicted to TikTok will never give you that
             absurd experience where your brain "hitches" while doing a task
             and you rapidly oscillate between progressing towards one task
             vs another. Being habituated to checking your phone at every
             down moment does not make you unable to ignore sensory input
             because the actual sensory-processing machinery in your brain
             is not functioning normally. Getting addicted to TikTok does
             not give you a child's handwriting despite decades of practice.
             If you do not already have significant stimming and jitter
             symptoms, TikTok will not make you develop them.
            
            You cannot learn to be ADHD.
       
            larodi wrote 3 days ago:
             ADHD is very soon going to be a major pandemic. Not one we talk
             about too much, as there are plenty of players ready to feed
             unlimited supplies of Concerta, Ritalin and Adderall, among
             others.
       
            noir_lord wrote 4 days ago:
            > I think the rates of ADHD are going to go through the roof soon
            
             As a diagnosed medical condition, I don't know; as for people
             having seemingly shorter and shorter attention spans, we are
             seeing that already. TikTok and YT Shorts and the like don't
             help; we've weaponised inattention.
       
          Insanity wrote 4 days ago:
          Or worse - they can tell the difference but don’t think it matters.
       
            rco8786 wrote 4 days ago:
            I see a lot of that also.
       
        noir_lord wrote 4 days ago:
        I just hit the back button as soon as my "this feels like AI" sense
        tingles.
        
         Now you could argue that I don't know it was AI, that it could just
         be really mediocre writing - it could indeed, but I hit the back
         button there as well, so it's a wash either way.
       
          shadowgovt wrote 4 days ago:
          I do the same, but for blog posts complaining about AI.
          
          At this point, I don't know there's much more to be said on the
          topic. Lines of contention are drawn, and all that's left is to see
          what people decide to do.
       
          embedding-shape wrote 4 days ago:
           I do almost the same, but my trigger is "this isn't
           interesting/fun to read", and I don't really care if it was
           written by AI or not. If it's interesting/fun, it's
           interesting/fun, and if it isn't, it isn't. Many times it's
           obviously AI, but sometimes, as you said, it could just be bad;
           in the end it doesn't really matter, I don't want to continue
           reading it regardless.
       
          rco8786 wrote 4 days ago:
           There's definitely an uncanny valley with a lot of AI. But also,
           it's entirely likely that lots of what we're reading is
           AI-generated and we can't tell at all. This post could easily be
           AI (it's not, but it could be).
       
            Waterluvian wrote 4 days ago:
             Ah, the portcullis to the philosophical topic of “if you
             couldn’t tell, does that demonstrate that authenticity doesn’t
             matter?”
       
              noir_lord wrote 4 days ago:
               I think it does. We could get a robotic arm to paint in the
               style of a Dutch master, but it'd not be a Dutch master.
               
               I'd sooner have a ship painting from the little shop in the
               village, where the little old fella paints them in the shop,
               than a perfect robotic simulacrum of a Rembrandt.
               
               Intention matters; sometimes it matters less, but I think it
               matters.
               
               Writing is communication. It's one of the things we as humans
               do that makes us unique - why would I want to reduce that to
               a machine generating it, or read it when it has?
       
                yoyohello13 wrote 4 days ago:
                I’ve been learning piano and I’ve noticed a similar thing
                with music. You can listen to perfect machine-generated
                performances of songs and there is just something missing. A
                live performance, even of a master pianist, will have little
                ‘mistakes’ or interpretations that make the whole
                performance so much more enjoyable. Not only that, but just
                knowing that a person spent months drilling a song adds
                something.
       
                  Waterluvian wrote 3 days ago:
                  Two things this great comment reminds me of:
                  
                  I've been learning piano too, and I find more joy in
                  performing a piece poorly than in listening to it played
                  competently. My brother asked me why I play if I'm just
                  playing music that's already been performed (a leading
                  question; he's not ignorant). I asked him why he plays
                  hockey when he can watch pros play it far better. It's the
                  journey, not the destination.
                  
                  I've been (re-)re-re-watching Star Trek TNG, and Data
                  touches on this issue numerous times, one of which is
                  specifically about performing violin (but also reciting
                  Shakespeare). And the message is what you're sharing: to
                  recite a piece with perfect technical execution results in
                  an imperfect performance. It's the _human_ aspects that
                  lend a piece the deep emotion that other humans connect
                  with, often without being able to concretely describe why.
                  Let us feel your emotions through your work. Everything
                  written on the page is just the medium for those emotions.
                  Without emotion, your perfectly recited piece is a
                  delivered blank message.
       
                    Peritract wrote 3 days ago:
                    > Ah, but a man's reach should exceed his grasp,
                    Or what's a heaven for?
                    
  HTML              [1]: https://www.poetryfoundation.org/poems/43745/andre...
       
                cubefox wrote 4 days ago:
                That's also why in The Matrix (1999) the main character takes
                the red pill (facing grim reality) rather than the blue pill
                (forgetting about grim reality and going back to a happy
                illusion).
       
                  noir_lord wrote 4 days ago:
                  Aye, I always thought the character of Cypher was tragic
                  as well: his reality sucked so much that he'd consciously
                  go back to living a lie he wouldn't remember, and then
                  forget he'd made that choice.
                  
                  The Matrix was and is fantastic on many levels.
       
                    DaiPlusPlus wrote 2 days ago:
                    It’s a shame they never made any sequels
       
       
   DIR <- back to front page