       
       COMMENT PAGE FOR:
  HTML   Believe the Checkbook
       
       
        TheCraiggers wrote 2 hours 31 min ago:
        How do I know they didn't buy them just to make sure their competitors
        couldn't?
       
          kubb wrote 2 hours 28 min ago:
          Can anyone tell me the leading theory explaining the acquisition?
          
          I can’t see how buying a runtime for the sake of Claude Code makes
          sense.
       
        faxmeyourcode wrote 2 hours 40 min ago:
        While I agree with the premise of the article, even if it was a bit
        shallow, this claim made at the beginning is also still true:
        
        > Everyone’s heard the line: “AI will write all the code;
        engineering as you know it is finished.”
        
         Software engineering pre-LLMs will never, ever come back. Lots of
         folks don't understand that. What we're doing at the end of 2025
         looks so different from what we were doing at the end of 2024.
         Engineering as we knew it a year or two ago will never return.
       
          maccard wrote 26 min ago:
          Does it?
          
           I use AI as a smart autocomplete - I’ve tried multiple tools on
           multiple models and I still _regularly_ have it dump absolute
           nonsense into my editor - in the best case it’s gone on a
           tangent, but in the most common case it’s assumed something
           (oftentimes directly contradicting what I’ve asked it to do),
           gone with it, and lost the plot along the way. Of course, when
           I correct it, it says “you’re right, X doesn’t exist so we need
           to do X”…
           
           Has it made me faster? Yes. Has it changed engineering - not
           even close. There’s absolutely no world where I would trust
           what I’ve seen out of these tools to run in the real world,
           even with supervision.
       
        drcode wrote 4 hours 33 min ago:
         The Bun acquisition is driven by current AI capabilities.
        
        This argument requires us to believe that AI will just asymptote and
        not get materially better.
        
        Five years from now, I don't think anyone will make these kinds of
        acquisitions anymore.
       
          nitwit005 wrote 1 hour 25 min ago:
          An Anthropic engineer was getting some attention for saying six
          months: [1] I assume this is at least partially a response to that.
          They wouldn't buy a company now if it would actually happen that
          fast.
          
  HTML    [1]: https://www.reddit.com/r/ClaudeAI/comments/1p771rb/anthropic...
       
          bigstrat2003 wrote 3 hours 19 min ago:
          > This argument requires us to believe that AI will just asymptote
          and not get materially better.
          
          It hasn't gotten materially better in the last three years. Why would
          it do so in the next three or five years?
       
            bitwize wrote 2 hours 30 min ago:
             Deep learning and transformers each produced a step-function
             jump in AI's capabilities. It may not happen, but it's
             reasonable to expect another step-function development soon.
       
          0x3f wrote 4 hours 23 min ago:
          > This argument requires us to believe that AI will just asymptote
          and not get materially better.
          
           That's not what asymptote means. Presumably what you mean is
           the curve levelling off, which it is already doing.
       
            SoftTalker wrote 3 hours 37 min ago:
            This seems overly pedantic. The intended meaning is clear.
       
        zamadatix wrote 4 hours 57 min ago:
         Something about the way the article sets up the conversation nags
         at me a bit - even though it concludes with statements and
         reasoning I generally agree with. It sets out what it wants to
         argue clearly at the start:
        
        > Everyone’s heard the line: “AI will write all the code;
        engineering as you know it is finished... The Bun acquisition blows a
        hole in that story.”
        
         But what the article actually discusses and demonstrates by the
         end is that the aspects of engineering beyond writing the code
         are where the value of human engineers lies at this point. To me
         that doesn't seem like a revealed preference in this case. Taken
         back to the first part of the original quote above, it's just a
         different wording for AI being the code writer and engineering
         being something different.
        
         I think what the article really means to argue against is the
         claim "because AI can generate lots of code we don't need any
         type of engineer", but that's just not what the quote they chose
         to set out against is saying. Without changing that claim, the
         acquisition of Bun is not really a counterexample; Bun had just
         already changed the way they do engineering, so the AI wrote the
         code and the engineers did the other things.
       
          fwip wrote 3 hours 21 min ago:
           I mean, it smells like an AI slop article, so it's hard to
           expect much coherence.
       
            fwip wrote 46 min ago:
            I guess y'all disagree?
            
            > The Bun acquisition blows a hole in that story.
            
            > That contradiction is not a PR mistake. It is a signal.
            
            > The bottleneck isn’t code production, it is judgment.
            
            > They didn’t buy a pile of code. They bought a track record of
            correct calls in a complex, fast-moving domain.
            
            > Leaders don’t express their true beliefs in blog posts or
            conference quotes. They express them in hiring plans, acquisition
            targets, and compensation bands.
            
            Not to mention the gratuitous italics-within-bold usage.
       
          croes wrote 4 hours 32 min ago:
           But the engineers can do it because they have written lots of
           code before. Where will these engineers get their experience in
           the future?
           
           And what about vibe coding? The whole selling point of many AI
           companies is that you don’t need experience as a programmer.
           
           So they're selling something that isn’t true: it’s not FSD for
           coding but driver assistance.
       
            zamadatix wrote 4 hours 20 min ago:
             These are all things I'd rather have seen the article set out
             to talk about as well. Instead it opens by trying to disprove
             a statement that AI can write the coding portion of the
             engineering problem, by showing AI being used exactly that
             way at Bun, as if that meant Anthropic must not actually
             believe it.
       
        jollyllama wrote 5 hours 6 min ago:
        "Believe the checkbook? Why do that when I can get pump-faked into
        strip-mining my engineering org?"- VPs everywhere
       
        RandallBrown wrote 5 hours 22 min ago:
        > The bottleneck isn’t code production, it is judgment.
        
        It always surprises me that this isn't obvious to everyone. If AI wrote
        100% of the code that I do at work, I wouldn't get any more work done
        because writing the code is usually the easy part.
       
          bibimsz wrote 21 min ago:
           I thought you were going to point out how this phrase (and
           others) makes it painfully obvious this article was written by
           AI.
       
          Quothling wrote 1 hour 0 min ago:
           I think it depends on the sort of work you do. We had a HubSpot
           integration which hadn't been touched for three years break,
           probably because someone at HubSpot sunset their v1 API a few
           weeks too early... Our internal AI tool, which I've built my
           own agents on, updated our data transfer service to use the v3
           API. It also added typing, but kept the rather insane way of
           delivering the data since... well... since it's worked fine for
           3 years. It's still not a great piece of software that runs for
           us, but it's better now than it was yesterday, and it'll now go
           back to just delivering business value in its extremely
           imperfect form.
           
           All I had to do was a two-line prompt, and accept the pull
           request. It probably took 10 minutes out of my day, most of
           which was the people I was helping explaining what they thought
           was wrong. I think it might've taken me all day if I had to go
           through all the code and the documentation and fix it myself.
           It might have taken a couple of days, because I probably
           would've made it less insane.
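           
           For flavor, roughly the kind of change it made (a simplified
           sketch, not our actual code; the endpoint shapes are the public
           HubSpot ones, from memory):
           
             import requests
             
             TOKEN = "..."  # private-app token; replaces the old hapikey
             
             # v1 (sunset): GET /contacts/v1/lists/all/contacts/all?hapikey=...
             # v3: GET /crm/v3/objects/contacts with Bearer auth
             def fetch_contacts(limit: int = 100) -> list[dict]:
                 resp = requests.get(
                     "https://api.hubapi.com/crm/v3/objects/contacts",
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     params={"limit": limit},
                     timeout=30,
                 )
                 resp.raise_for_status()
                 return resp.json()["results"]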
          
          For other tasks, like when I'm working on embedded software using AI
          would slow me down significantly. Except when the specifications are
          in German.
       
          phantasmish wrote 1 hour 37 min ago:
           At my company, doubling the speed of the writing-code part of
           software projects might speed them up 5%. I think even that’s
           optimistic.
          
          Imperfectly fixing obvious problems in our processes could gain us
          20%, easy.
          
          Which one are we focusing on? AI. Duh.
       
          xnx wrote 1 hour 59 min ago:
          Lots of people have good judgement but don't know the arcane spells
          to cast to get a computer to do what they want.
       
          skybrian wrote 3 hours 58 min ago:
          I'm retired now, but I spent many hours writing and debugging code
          during my career. I believed that implementing features was what I
          was being paid to do. I was proud of fixing difficult bugs.
          
          A shift to not writing code (which is apparently sometimes possible
          now) and managing AI agents instead is a pretty major industry
          change.
       
          linhns wrote 4 hours 16 min ago:
           Well, you should be surprised by the number of people who do
           not know this. Klarna is probably the most popular example,
           where the CEO was all about creating more code, then fired
           everyone, before coming to regret it.
       
          gowld wrote 4 hours 51 min ago:
          I don't understand this thinking.
          
          How many hours per week did you spend coding on your most recent
          project? If you could do something else during that time, and the
          code still got written, what would you do?
          
          Or are you saying that you believe you can't get that code written
          without spending an equivalent amount of time describing your
          judgments?
       
            layer8 wrote 3 hours 39 min ago:
            > Or are you saying that you believe you can't get that code
            written without spending an equivalent amount of time describing
            your judgments?
            
            It’s sort of the opposite: You don’t get to the proper
            judgement without playing through the possibilities in your mind,
            part of which is accomplished by spending time coding.
       
            jgeada wrote 3 hours 39 min ago:
             All you did was change the programming language from (say)
             Python to English. One is designed to be a programming
             language, with few ambiguities etc. The other is, well,
             English.
             
             The speed of typing code is not all that different from the
             speed of typing English, even accounting for the volume
             expansion going from English to code. And then, of course,
             there is the new extra cost of reading and understanding
             whatever code the AI wrote.
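             
             A toy illustration (mine, not from the thread): an
             unambiguous English spec runs about as long as the Python it
             describes.
             
               # "Return the index of the first occurrence of each
               # distinct item, preserving the order of the input."
               def first_seen_indices(items):
                   seen = set()
                   out = []
                   for i, x in enumerate(items):
                       if x not in seen:
                           seen.add(x)
                           out.append(i)
                   return out
               
               # first_seen_indices("abracadabra") -> [0, 1, 2, 4, 6]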
       
              rootusrootus wrote 1 hour 58 min ago:
              Exactly.  LLMs are faster for me when I don't care too much about
              the exact form the functionality takes.  If I want precise
              results, I end up using more natural language to direct the LLM
              than it takes if I just write that part of the code myself.
              
              I guess we find out which software products just need to be 'good
              enough' and which need to match the vision precisely.
       
            RandallBrown wrote 4 hours 22 min ago:
            In my experience (and especially at my current job) bottlenecks are
            more often organizational than technical. I spend a lot of time
            waiting for others to make decisions before I can actually proceed
            with any work.
            
             My judgement is built into the time it takes me to code. I
             think I would spend the same amount of time on that while
             reviewing the AI code to make sure it isn't doing something
             silly (even if it does technically work).
            
            A friend of mine recently switched jobs from Amazon to a small AI
            startup where he uses AI heavily to write code. He says it's
            improved his productivity 5x, but I don't really think that's the
            AI. I think it's (mostly) the lack of bureaucracy in his small 2 or
            3 person company.
            
            I'm very dubious about claims that AI can improve productivity so
            much because that just hasn't been my experience. Maybe I'm just
            bad at using it.
       
              fragmede wrote 2 min ago:
              Does voice transcription count as AI? I'm an okay typer, but
              being able to talk to my computer, in English, is definitely part
             of the productivity speed-up for me. Even though it struggles
             to do CSS (because CSS is the devil), being able to yell at my
              computer and have it actually do things is cathartic in ways I
              never thought possible.
       
            kibwen wrote 4 hours 34 min ago:
            "Writing code" is not the goal. The goal is to design a coherent
            logical system that achieves some goal. So the practice of
            programming is in thinking hard about what goal I want to achieve,
            then thinking about the sort of logical system that I could design
            that would allow me to verifiably achieve that goal, then actually
            banging out the code that implements the abstract logical system
            that I have in my head, then iterating to refine both the abstract
            system and its implementation. And as a result of being the one who
            produced the code, I have certainty that the code implements the
             system I have in mind, and that the system it represents is
             fit for the purpose of achieving the original goals.
            
            So reducing the part where I go from abstract system to concrete
            implementation only saves me time spent typing, while at the same
            time decoupling me from understanding whether the code actually
            implements the system I have in mind. To recover that coupling, I
            need to read the code and understand what it does, which is often
            slower than just typing it myself.
            
            And to even express the system to the code generator in the first
            place still requires me to mentally bridge the gap between the goal
            and the system that will achieve that goal, so it doesn't save me
            any time there.
            
            The exceptions are things where I literally don't care whether the
            outputs are actually correct, or they're things that I can rely on
            external tools to verify (e.g. generating conformance tests), or
            they're tiny boilerplate autocomplete snippets that aren't trying
            to do anything subtle or interesting.
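             
             Concretely (a minimal sketch; slugify here stands in for
             generated code, and the asserts are the external checks I
             write and actually trust):
             
               import re
               
               def slugify(s: str) -> str:  # pretend this was generated
                   return re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")
               
               # Verification I control, independent of the generator:
               assert slugify("Hello, World!") == "hello-world"
               assert slugify("  --  ") == ""
               assert all(c.isalnum() or c == "-" for c in slugify("a  b"))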
       
              ryandrake wrote 3 hours 31 min ago:
              The actual act of typing code into a text editor and building it
              could be the least interesting and least valuable part of
              software development. A developer who sees their job as "writing
              code" or a company leader who sees engineers' jobs as "writing
              code" is totally missing where the value is created.
              
              Yes, there is artistry, craftsmanship, and "beautiful code" which
              shouldn't be overlooked. But I believe that beautiful code comes
              from solid ideas, and that ugly code comes from flawed ideas. So,
              as long as the (human-constructed) idea is good, the code
              (whether it is human-typed or AI-generated) should end up
              beautiful.
       
                RunSet wrote 1 hour 32 min ago:
                Raising the question: Where is the beautiful machine-generated
                code?
       
            scott_w wrote 4 hours 40 min ago:
            I think OP is closer to the latter. How I typically have been using
            Copilot is as a faster autocomplete that I read and tweak before
            moving on. Too many years of struggling to describe a task to Siri
            left me deciding “I’ll just show it what I want” rather than
            tell.
       
          add-sub-mul-div wrote 4 hours 56 min ago:
          I'll stare at a blank editor for an hour with three different
          solutions in my head that I could implement, and type nothing until a
          good enough one comes to mind that will save/avoid time and trouble
          down the road. That last solution is not best for any simple reason
          like algorithmic complexity or anything that can be scraped from web
          sites.
       
            aaroninsf wrote 2 hours 35 min ago:
             No shade on your skills, but for most problems this is
             already false; the solutions have already been scraped.
             
             All OSS has been ingested, and all the discussion in forums
             like this about it, and the personal blog posts and
             newsletters about it; and the bug tracking; and the pull
             requests, and...
             
             and training etc. is only going to get better at filtering
             out what is "best."
       
              add-sub-mul-div wrote 1 hour 24 min ago:
              The point is that the best solution is based on specific context
              of my situation and the right judgment couldn't be known by
              anyone outside of my team/org.
       
              al_borland wrote 1 hour 56 min ago:
              A vast majority of the problems I’m asked to solve at work do
              not have open-source code I can simply copy or discussion forums
              that already decided the best answer. Enterprise customers rarely
              put that stuff out there. Even if they did, it doesn’t account
               for the environment the solution sits in, possible future
              integrations, off-the-wall requests from the boss, or knowing
              that internal customer X is going to want some other wacky thing,
              so we need to make life easy on our future selves.
              
               At best, what I find online are basic day-1 tutorials and
               proof-of-concept stuff. None of it could be used in
               production, where we actually need to handle errors and
               possible failure situations.
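               
               For instance (a contrived sketch with a hypothetical URL),
               the gap between the two:
               
                 import requests
                 
                 url = "https://example.com/api"  # hypothetical endpoint
                 
                 # Day-1 tutorial version:
                 #   data = requests.get(url).json()
                 
                 # Minimum for production: timeout, status check, retries.
                 data = None
                 for attempt in range(3):
                     try:
                         resp = requests.get(url, timeout=10)
                         resp.raise_for_status()
                         data = resp.json()
                         break
                     except (requests.RequestException, ValueError):
                         if attempt == 2:
                             raise  # out of retries; let alerting see it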
       
        neilv wrote 6 hours 11 min ago:
        > Treat AI as force multiplication for your highest-judgment people.
        The ones who can design systems, navigate ambiguity, shape strategy,
        and smell risk before it hits. They’ll use AI to move faster, explore
        more options, and harden their decisions with better data.
        
        Clever pitch.  Don't alienate all the people who've hitched their
        wagons to AI, but push valuing highly-skilled ICs as an actionable
        leadership insight.
        
        Incidentally, strategy and risk management sound like a pay grade bump
        may be due.
       
        conductr wrote 6 hours 15 min ago:
        People speak in relative terms and hear in absolutes. Engineers will
        never completely vanish, but it will certainly feel like it if labor
        demand is reduced enough.
        
         Technically, there’s still a horse-buggy whip market, an abacus
         market, and probably one for anything else you think technology
         consumed. It’s just a minuscule fraction of what it once was.
       
          marcosdumay wrote 4 hours 43 min ago:
          > but it will certainly feel like it if labor demand is reduced
          enough
          
           All the past productivity multipliers in programming led to
           increased demand. Do you really think the market is saturated
           now? And that what saturated it is one of the least impactful
           "revolutionary" tools our profession has gotten?
           
           Keep in mind that looking at statistics won't lead to any real
           answer; everything is manipulated beyond recognition right now.
       
        hapless wrote 6 hours 22 min ago:
         The ten-dollar word for this is “revealed preferences”.
       
          recursive wrote 5 hours 55 min ago:
          I learned that phrase from one of the bold sentences in this article.
       
       