        _______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
  HTML Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
  HTML   Some software bloat is OK
       
       
        crq-yml wrote 4 hours 55 min ago:
         Bloat mostly reflects Conway's law, the outcome being that you end
         up building towards the people you're talking to.
        
         If you build towards everyone, you end up with a large standard like
         Unicode or IEEE 754. You don't need everything those standards have
         for your own messages or computations - sometimes they even run
         counter to your goal - and they end up wasting transistors, but they
         are convenient enough to be useful defaults, convenient enough to
         store data that is going to be reused for something else later, and
         therefore they are ubiquitous in modern computing machines.
        
        And when you have the specific computation in mind - an application
        like plotting pixels or ballistic trajectories - you can optimize the
        heck out of it and use exactly the format and features needed and get
        tight code and tight hardware.
        
        But when you're in the "muddled middle" of trying to model information
        and maybe it uses some standard stuff but your system is doing
        something else with it and the business requirements are changing and
        the standards are changing too and you want it to scale, then you end
        up with bloat. Trying to be flexible and break up the system into
        modular bits doesn't really stave this off so much as it creates a
        Whack-a-Mole of displaced complexity. Trying to use the latest tools
        and languages and frameworks doesn't solve this either, except where
        they drag you into a standard that can successfully accommodate the
        problem. Many languages find their industry adoption case when a
        "really good library" comes out for it, and that's a kind of informal
        standardizing.
        
        When you have a bloat problem, try to make a gigantic table of
        possibilities and accept that it's gonna take a while to fill it in.
        Sometimes along the way you can discover what you don't need and make
        it smaller, but it's a code/docs maturity thing. You don't know without
        the experience.
       
        GuB-42 wrote 6 hours 38 min ago:
         In the article, many tradeoffs are mentioned. The thing is, many of
         these tradeoffs are necessary to support the bloat; that is, bloat
         needs bloat.
        
        - Layers & frameworks: We always had some layers and frameworks, the
        big one being the operating system. The problem is that now, instead of
        having these layers shared between applications (shared libraries, OS
        calls, etc...), every app wants to do their own thing and they all ship
        their own framework. For the calculator example, with the fonts, common
        controls, rendering code, etc... the global footprint was probably
        several MBs even in the early days, but the vast majority of it was
        shared with other applications and the OS shell itself, resulting in
        only a few extra kB for the calculator.
        
        - Security & isolation: That's mostly the job of the OS and even the
        hardware. But the one reason why we need security so much is that the
        more bloated your code is, the more room there is for vulnerabilities.
        We don't need to isolate components that don't exist in the first
        place.
        
        - Robustness & error handling / reporting: The less there is, the less
        can go wrong, so more robust and less errors to handle.
        
        - Globalization & accessibility: That's true that it adds some bloat,
        however, that's something that the OS should take care of. If everyone
        uses the same shared GUI toolkit, only it has to deal with these
        issues. Note that many of these problems were addressed in the Windows
        9x era.
        
        - Containers & virtualization: Containerization is a solution to
        dependency hell and non-portable code, you carry your entire
        environment with you so you don't have to adjust to a new environment.
         The more dependencies you have, i.e. the more bloat, the more you need
        it. And going back to security and accessibility, since you are now
         shipping your environment, you don't benefit from system-wide updates
        that address these issues.
        
         - Engineering trade-offs: that is, computers are cheap, developers
         are expensive. We are effectively trading the time to hand-craft
         lightweight, optimized software for keeping up with the bloat.
        
        I get the author's point, but I believe that most of it is
        self-inflicted. I remember the early days of Android. I had a Nexus
        One, 512MB RAM / 1GHz single core CPU / 512MB Flash + 4GB MicroSD and
        it could do most of what I am doing now with a phone that is >10x more
        powerful in every aspect. And that's with internationalization, process
        isolation, the JVM, etc... Not only that but Google was much smaller
        back then, so much for lowering development costs.
       
        paulddraper wrote 7 hours 19 min ago:
        > Thanks to Microsoft abusing their position with GitHub to perform the
        legally questionable act of stealing billions of lines of GPL code
        
         I think this is a strawman?
        
        But it's pretty ridiculous if anyone believes this.
        
         FOSS code can't be "stolen." The whole point of the GPL and the free
         software movement is that software should be free to use and modify.
       
          OkayPhysicist wrote 7 hours 15 min ago:
           FOSS code can absolutely be stolen, if its usage is in violation of
          the contract that it is shared under. In the case of the GPL, that
          means granting your users the same freedoms you were granted, which
          last I checked "Open"AI absolutely is not doing.
       
        mg0x7BE wrote 9 hours 9 min ago:
        Back in the good old days, I used to play Quake 1 multiplayer
         (QuakeWorld) a lot on Internet servers. You could place an
         "autoexec.cfg" file in the Quake folder that automatically executed
         commands after launching the game, including /connect [IP Address]
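         
         For example, an autoexec.cfg might look like this (the IP here is
         made up):
         
             // executed automatically at startup
             name "player"
             rate 25000
             connect 192.0.2.1:27500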
        
        The time needed from the moment you launched the game (clicked on the
        .exe) to the moment you entered the server (to the map view) with all
        assets 100% loaded was about 1 second. Literally! You click the icon on
        your desktop and BAM! you're already on the server and you can start
        shooting. But that was written by John Carmack in C :-)
        
        From other examples - I have a "ModRetro Chromatic" at home which is
        simply an FPGA version of the Nintendo Game Boy. On this device, you
        don't see the falling "Nintendo" text with the iconic sound known from
        normal Game Boys. When I insert a cartridge and flip the Power switch,
         I'm in the game INSTANTLY. There's absolutely zero delay here. You
        turn it on and you're in the game, literally just like with that Quake.
        
        For comparison - I also have a Steam Deck, whose boot time is so long
        that I sometimes finish my business on the toilet before it even starts
        up. The difference is simply colossal between what I remember from the
        old days and what we have today. On old Windows 2000, everything seemed
        lighter than on modern machines. I really miss that.
       
          bitwize wrote 6 hours 44 min ago:
          Game carts also have much faster access times than just about
          anything else contemporary because they're just ROM. If Linux,
          OpenGL, Vulkan, etc. were in the ROM of your Steam Deck, and your
          games were on ROM (not flash) carts, that'd boot up instantly too.
          
          Windows 2000 boots up fast on modern hardware. You're looking through
          rose-colored-ass glasses if you think it booted up that quick on the
          hardware available at the time of release. Windows NT was a pig in
          its day, but at least it was a clean pig, free of spyware and other
          unnecessary crapware (unless you were like a client site I visited,
          and just let Bonzi Buddy, Comet Cursor, and such run rampant across
          your sensitive corporate workstations).
       
          seba_dos1 wrote 8 hours 18 min ago:
          A system like SteamOS can be made to boot within seconds - but it
          takes some effort to set it up like that and I don't think anybody
          there cares enough when most people would just put the Deck to sleep
          where it wakes up in a single second.
       
        1313ed01 wrote 11 hours 41 min ago:
        One thing that bothers me about extreme bloat is that I do not want
        every app to push against the limit of what my computer can handle.
         It's fine if some game or compiler or similarly complex software does that
        every now and then. A common pro-bloat argument is that software should
        make use of all the hardware instead of leaving it idle, and that makes
        sense in some contexts. But 99% of the time I would prefer smaller
        things that I could have lots of and never have to worry about disk or
        RAM use.
       
        jerdman76 wrote 11 hours 54 min ago:
        This article is awesome - I love the meme
        
         As a former CS major (30 years ago) who went into IT for my first
         career, I've wondered about bloat, and this article gave me the
         layman's explanation.
        
         I am still blown away by the comparison pointing out that the WebP
         image of Super Mario is larger than the Super Mario game itself!
       
          masfuerte wrote 11 hours 33 min ago:
          The Mario picture is very misleading.  It's not lossless.  It has way
          too many colours.  An actual lossless screenshot (albeit not as wide)
          is only 3KB:
          
  HTML    [1]: https://en.wikipedia.org/wiki/File:NES_Super_Mario_Bros.png
       
            senfiaj wrote 8 hours 30 min ago:
             Sorry. My bad. It looks like the size is also very sensitive to
             the compression method and software (regardless of being PNG or
             WEBP). I found another PNG picture here [1]; it is 64KiB. When
             you stretch the image, that's also likely to add kilobytes. I
             guess I need to update the image.
            
             But anyway, I think it's still very telling when an entire game
             is smaller than a picture of it. Also consider that even your
             tiny PNG example (3.37KiB) still cannot fit into the RAM / VRAM
             of a NES console, which shows the contrast between these eras in
             terms of amounts of memory.
            
  HTML      [1]: https://www.freecodecamp.org/news/where-do-all-the-bytes-c...
       
              masfuerte wrote 5 hours 8 min ago:
              > I found another PNG picture here
              
              That image has a similar problem to yours.  It has been scaled up
              using some kind of interpolation which introduces a load of extra
              colours to smooth the edges.  This is not a great fit for PNG,
              which is why it is 64KB.
              
              The article claims that it is only 5.9KB.  I guess it was that
              small originally and it's been mangled by the publishing process.
       
                senfiaj wrote 1 hour 4 min ago:
                 I think a more realistic comparison is a BMP image (16 or 256
                 color), because the game didn't compress the image the way
                 PNG or WebP does. When I converted that tiny ~3KiB image to a
                 16-color BMP the size was 28.1KiB; for 256 colors it was
                 57KiB. But some colors were lost.
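                 
                 (Those numbers roughly check out: assuming a 256x224
                 screenshot, 4 bits per pixel is 256 * 224 / 2 = 28,672
                 bytes = 28KiB before headers and palette, and 8 bits per
                 pixel is 57,344 bytes = 56KiB.)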
                
                 Anyway, I don't think we can have a 100% apples-to-apples
                 comparison, because the game used different compression and
                 rendering techniques. Also consider the actual amount of RAM
                 / VRAM the images occupy: in RAM / VRAM they are probably in
                 decompressed form, which takes much more memory.
       
        metalman wrote 11 hours 58 min ago:
         Unfortunately the article is very offensive to anyone living outside
         of major urban centers. It presumes that, along with our daily cake,
         unlimited data and the means to acquire it are only to be mentioned
         as trivialities, if at all.
         Rural devitalization is accelerating, with another article here
         describing how unreliable electrical power is in California's
         countryside.
         I won't get into the details, but please imagine all the fun to be
         had attempting to turn off a Win11 laptop, with hyper-low-speed and
         low-bandwidth internet, in order to try and save some battery in
         case the power does not come back on - which is the reality on most
         of our planet's surface.
       
        adamzwasserman wrote 12 hours 8 min ago:
        TFA lists maintainability as a benefit of bloat ("modularity,
        extensibility, code patterns make it easier to maintain"). Completely
        ignores how bloat harms maintainability by making code unknowable.
        
        Stack enough layers - framework on library on abstraction on dependency
        - and nobody understands what the system does anymore. Can't hold it in
        your head. Debugging becomes archaeology through 17 layers of
        indirection. Features work. Nobody knows why. Nobody dares touch them.
        
        TFA touches this when discussing complexity ("people don't understand
        how the entire system works"). But treats it as a separate issue. It's
        not. Bloat creates unknowable systems. Unknowable systems are
        unmaintainable by definition.
        
        The "developer time is more valuable than CPU cycles" argument falls
        apart here. You're not saving time. You're moving the cost. The hours
        you "saved" pulling in that framework? You pay them back with interest
        every time someone debugs a problem spanning six layers of abstraction
         they don't understand.
       
          senfiaj wrote 10 hours 7 min ago:
           I mean, there are different kinds of bloat. Some are justifiable,
           some are not, and some are just a symptom of other problems (the
           last two are not mutually exclusive), like mismanagement or
           incompetence (from management, developers, team leads, etc). This
           is somewhat similar to cholesterol: there are different types, some
           might be really bad, some might be harmless, etc.
          
           Bloat (do you mean code duplication here?) can be either a cause or
           a symptom of a maintainability problem. It's like a vicious cycle.
           A spaghetti code mess (not the same thing as bloat) will be prone
           to future bloat because developers don't know what they are doing,
           I mean in the bad sense. You can still be unfamiliar with the
           entire system, but if the code is well organized, reusable,
           modular, and testable, you can work relatively comfortably with it
           and have little fear of introducing horrible regressions (as you
           would with spaghetti code). You can also do refactors much more
           easily. Meanwhile, badly managed spaghetti code is much less
           testable and reusable; when developers work with such code, they
           often don't want to reuse existing code, because it is already
           fragile and not reusable. For each feature they prefer to create a
           new function or duplicate an existing one.
          
           This is a vicious cycle: the code starts to rot, becoming more and
           more unmaintainable, duplicated, fragile, and, very likely,
           inefficient. This is what I meant.
       
          locknitpicker wrote 10 hours 15 min ago:
          > Stack enough layers - framework on library on abstraction on
          dependency - and nobody understands what the system does anymore.
          
          This is specious reasoning, as "optimized" implementations typically
          resort to performance hacks that make code completely unreadable.
          
          > TFA touches this when discussing complexity ("people don't
          understand how the entire system works"). But treats it as a separate
          issue. It's not. Bloat creates unknowable systems.
          
           I think you're confusing things. Bloat and lack of a clear software
           architecture are not the same thing. Your run-of-the-mill app
           developed around a low-level GUI framework like the win32 API tends
           to be far more convoluted and worse to maintain than equivalent
           apps built around high-level frameworks, including Electron apps.
           If you develop an app into a big ball of mud, you will have a bad
           time figuring it out regardless of what framework you're using (or
           not using).
       
            gwbas1c wrote 8 hours 24 min ago:
            > This is specious reasoning, as "optimized" implementations
            typically resort to performance hacks that make code completely
            unreadable.
            
            That really depends on context, and you're generalizing based on
            assumptions that don't hold true:
            
            Replacing bloated ORM code with hand-written SQL can be
            significantly more readable if it boils down to a simple query that
            returns rows that neatly map to objects. It could also boil down to
            a very complicated, hard to follow query that requires gymnastics
            to populate an object graph.
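             
             For instance (hypothetical ORM and schema, just to illustrate the
             two shapes):
             
                 // ORM style: the query is hidden behind configuration
                 const viaOrm = await repo.find({ where: { active: true } });
                 
                 // Hand-written SQL: what runs is exactly what you read,
                 // and the rows map straight onto objects
                 const viaSql = await db.query(
                   "SELECT id, name, email FROM users WHERE active = TRUE");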
            
            The same can be said for optimizing CPU usage. It might be a case
            of removing unneeded complexity, or it could be a case of
            microoptimizations that require unrolling loops and copy & paste
            code.
            
            ---
            
            I should point out that I've lived the ORM issue: I removed an ORM
            from a product and it became industry-leading for performance, and
            the code was so clean that newcomers would compliment me on how
            easy it was to understand data access. In contrast, the current
            product that I work on is a clear example of when an ORM is
            justified.
            
            I've also lived the CPU usage issue: I had to refactor code that
            was putting numeric timestamps into strings, and then had
            complicated code that would parse the strings to perform math on
            the timestamps. The refactor involved replacing the strings with a
            defined type. Not only was it faster, the code was easier to follow
            because the timestamps were well encapsulated.
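             
             A minimal sketch of that second refactor, in TypeScript just for
             illustration (the names here are invented):
             
                 // Before: timestamps smuggled around as strings
                 function addSecondsOld(ts: string, secs: number): string {
                   // parse, do the math, re-serialize - on every operation
                   return String(Number(ts) + secs * 1000);
                 }
                 
                 // After: a defined type; math stays math, and formatting
                 // happens only at the edges
                 type Millis = number & { readonly __brand: unique symbol };
                 const millis = (n: number) => n as Millis;
                 
                 function addSeconds(ts: Millis, secs: number): Millis {
                   return millis(ts + secs * 1000);
                 }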
       
            adamzwasserman wrote 8 hours 58 min ago:
             I'm not advocating for unreadable optimization hacks. I'm working
             within TFA's own framework.
             
             TFA argues that certain bloat (frameworks, layers, abstractions)
             is justified because it improves maintainability through
             "modularity, extensibility, code patterns."
            
            I'm saying: those same layers create a different maintainability
            problem that TFA ignores. When you stack framework on library on
            abstraction, you create systems nobody can hold in their head.
            That's a real cost.
            
            You can have clean architecture and still hit this problem. A
            well-designed 17-layer system is still 17 layers of indirection
            between "user clicks button" and "database updates.
       
          frisbee6152 wrote 11 hours 51 min ago:
          A well-optimized program is often a consequence of a deep
          understanding of the problem domain, good scoping, and mindfulness.
          
          It often feels to me like we’ve gone far down the framework road,
          and frameworks create leaky abstractions. I think frameworks are
          often understood as saving time, simplifying, and offloading
          complexity. But they come with a commitment to align your program to
          the framework’s abstractions. That is a complicated commitment to
          make, with deep implications, that is hard to unwind.
          
          Many frameworks can be made to solve any problem, which makes things
          worse. It invites the “when all you’ve got is a hammer,
          everything looks like a nail” mentality. The quickest route to a
          solution is no longer the straight path, but to make the appropriate
          incantations to direct the framework toward that solution, which
          necessarily becomes more abstract, more complex, and less efficient.
       
            ElevenLathe wrote 9 hours 32 min ago:
            The main point of the framework is to keep developers
            interchangeable, and therefore suppress wages. All mature
            industries have things like this: practices that aren't "optimal"
            (in a very narrow sense), but being standardized means that through
            competition and economies of scale they are still cheaper than the
            alternative, better-in-theory solution.
       
            adamzwasserman wrote 11 hours 27 min ago:
            I completely agree. This is the point I make here:
            
  HTML      [1]: https://hackernoon.com/framework-or-language-get-off-my-la...
       
        JohnFen wrote 12 hours 15 min ago:
        Bloat is always bad.
        
        That said, all of engineering is a tradeoff, and tradeoffs mean
        accepting some amount of bad in exchange for some amount of good.
        
        In these times, though, companies seem to be very willing to accept
        bloat for marginal or nonexistent returns, and this is one of the
        reasons why, in my opinion, so much of the software being released
        these days is poor.
       
        karmakaze wrote 12 hours 37 min ago:
         No, software bloat isn't OK. Tech debt can be OK, and software bloat can
        be the consequence of tech debt taken on with eyes open. But to say
        that (some) software bloat without considerations is ok is how we have
        the fastest machines imaginable and still end up with UI that can't
        keep up visually with keystrokes.
        
        Quoting Knuth without the entire context is also contributing to bloat.
        
        > Programmers waste enormous amounts of time thinking about, or
        worrying about, the speed of noncritical parts of their programs, and
        these attempts at efficiency actually have a strong negative impact
        when debugging and maintenance are considered.
        
        > We should forget about small efficiencies, say about 97% of the time:
        premature optimization is the root of all evil. Yet we should not pass
        up our opportunities in that critical 3%.
       
          locknitpicker wrote 10 hours 24 min ago:
          > But to say that (some) software bloat without considerations is ok
          is how we have the fastest machines imaginable and still end up with
          UI that can't keep up visually with keystrokes.
          
          Is this actually a problem, though? The blog features a section on
          tradeoffs, and dedicates an entire section to engineering tradeoffs.
          Perceived performance is one of these tradeoffs.
          
           You complain about UI not keeping up with key strokes. As a
           counterexample I point out Visual Studio Code. Its UI is not as
           snappy as native GUI frameworks, but we have a top-notch user
           experience that's consistent across operating systems and desktop
           environments. That's a win, isn't it? How many projects can make
           that claim?
          
          The blog post also has a section on how a significant part of the
          bloat is bad.
       
            JuniperMesos wrote 2 hours 47 min ago:
             The Visual Studio Code user experience is bad - in part because it
            is an electron app that is not as snappy as native GUI frameworks,
            but also for a variety of other reasons. I do not use it
            voluntarily and resent that many coding tools I would like to use
            deliberately architect themselves as VSCode plugins rather than
            more general tools applicable to any editor.
            
            I have definitely run into issues with the UI not visually keeping
            up with keystrokes in VSCode (occasionally), and also other
            Electron apps (more often - perhaps they haven't had as much
            attention to optimization as VSCode has). For this reason alone, I
            dislike the Electron ecosystem and I am extremely interested in
            projects to develop alternative cross platform renderers.
            
            Ultimately I would like to see Electron become a dead project so I
            never have to run into some interesting or useful or mandatory
            piece of software I need to use that was written using Electron
            because it was the most obvious choice for the developer.
       
            oofbaroomf wrote 6 hours 18 min ago:
            Ok, but something like Zed is almost as snappy as native GUI
            frameworks AND has a consistent user experience. It doesn't seem
            like they are making any tradeoffs there.
       
            whilenot-dev wrote 7 hours 2 min ago:
            > Perceived performance is one of these tradeoffs.
            
            Perceived performance should never be a tradeoff, only the measured
            performance impact can be one.
            
             My iPhone SE from 2020 has input delays of up to 2s after the
             update to iOS 26, and that's just really disappointing. I
             wouldn't mind if it were in the 0.3s range, even though that
             would still be terrible from a measured performance POV.
       
            grayhatter wrote 9 hours 36 min ago:
            > You complain about UI not keeping up with key strokes. As a
             counterexample I point out Visual Studio Code. Its UI is not as
             snappy as native GUI frameworks, but we have a top-notch user
            experience that's consistent across operating systems and desktop
            environments. That's a win, isn't it? How many projects can make
            that claim?
            
             Is it a win? Why? Consistency across platforms is a branding and
             business goal, not an engineering one. Consistency itself doesn't
             specify a direction; it just makes things more familiar, and
             easier to adopt without effort. It's easier to sit all day and
             never exercise.
             
             "It's what everybody does" or "it's what everybody uses" has
             never translated into something being good.
            
             Notably, the engineers I respect the most, and the ones making
            things that I enjoy using, none of them use vscode. I'm sure most
            will read this as an attack against their editor of choice, SHUN
            THE NON BELIEVER! But hopefully enough will realize that it's not
            actually an attack on them nor their editor, but instead I'm
            advocating for what is the best possible option, and not the
            easiest to use. Could they use vscode? Obviously yes, they could.
            They don't because the more experience you have, the easier it is
            to see that bloat get in the way.
       
              robenkleene wrote 4 hours 32 min ago:
               > Notably, the engineers I respect the most, and the ones making
              things that I enjoy using, none of them use vscode.
              
              Curious what they use?
       
            _aavaa_ wrote 9 hours 47 min ago:
             It is a problem. The engineering tradeoffs have to be made at
             each level of the stack, and as progressively more layers of the
             stack trade away speed, the effects compound.
            
            Nothing about a cross-platform UI requires that it not be snappy.
            Or that Electron is the best option possible.
            
             Did VSCode do a good job with the options available? Maybe, maybe
             not. But the options are where I think we should focus.
            
            Having to trade off between two bad options means you’ve already
            lost.
       
        benrutter wrote 12 hours 59 min ago:
        Tangent, but software bloat always leaves me wondering what
        hardware/software could be if we had different engineering goals.
        
         If we worked hard to keep OS requirements to a minimum - could we be
        looking at unimaginably improved battery life? Hyper reliable
        technology that lasts many years? Significantly more affordable
        hardware?
        
         We know that software bloat wastes RAM and CPU, but we can't know
         what alternatives we could have had if we hadn't spent our
         metaphorical budget on bloat already.
       
          1vuio0pswjnm7 wrote 11 hours 52 min ago:
          "If we worked hard to keep OS requirements to a minimum- could we be
          looking at unimaginably improved battery life? Hyper reliable
          technology that lasts many years? Significantly more affordable
          hardware?"
          
           A volunteer-supported UNIX-like OS, e.g., NetBSD, represents the
           closest thing to this ideal for me
          
          I am able to use an "old" operating system with new hardware.  No
          forced "upgrades" or remotely-installed "updates".  I decide when I
          want to upgrade software.  No new software is pre-installed
          
          This allows me to observe and enjoy the speed gains from upgrading
          hardware in a way I cannot with a corporate operating system.  The
           latter will usurp new hardware resources in large part for its own
          commercial purposes.  It has business goals that may conflict with
          the non-commercial interests of the computer owner
          
          It would be nice if software did not always grow in size.  It happens
          to even the simplest of programs. Look at the growth of NetBSD's init
          over time for example
          
           Why not shrink programs instead of growing them?
          
           Programmers who remove code may be the "heroes", as McIlroy once
          suggested ("The hero is the negative coder")
       
          Retric wrote 12 hours 2 min ago:
           Screens and radios do a lot to limit battery life on most modern
           devices, even if the energy used running the OS and user software
           were free.
          
           If, with a reasonable battery, standby mode can only last a few
           weeks and active use is at best a few days, then you might as well
           add a fairly beefy CPU, and with a beefy CPU, OS optimizations only
           go so far.  This is why eInk devices can end up with such a
           noticeably longer battery life: they now have a reason to put in a
           weak CPU and do some optimization, because the possibility of a
           long battery life is a huge potential selling point.
       
        everyone wrote 15 hours 44 min ago:
         I'm curious, can you vibe code assembly? Perhaps the AIs haven't been
         trained on much of it... But in theory you could get amazing
         performance without all the work? The only downside is you would have
         zero idea how the code works and it would be entirely unmaintainable.
       
          nineteen999 wrote 14 hours 34 min ago:
           I'm doing a bit of Z80/6502/x86_64 assembly with it, and a little C
           with sdcc and gcc. It's more fun than I thought it would be.
       
        pjmlp wrote 15 hours 51 min ago:
         Some, maybe; however, we have moved well beyond it being OK.
       
        eviks wrote 15 hours 52 min ago:
        > A significant part of this bloat is actually a tradeoff
        
        Or actually not, and the list doesn't help go beyond "users have more
        resources, so it's just easier to waste more resources"
        
        > Layers & frameworks
        
         There are a million of these, with performance differences of orders
         of magnitude, so an empty reference explains nothing re: bloat.
        
        But also
        
        > localization, input, vector icons, theming, high-DPI
        
         It's not bloat if it allows users to read text in an app! Or text
         that's not blurry! Or text that doesn't "burn their eyes"!
        
        > Robustness & error handling / reporting.
        
        Same thing, are you talking about a washing machine sending gigabytes
        of data per day for no improvement whatsoever "in robustness"? Or are
         you talking about some virtualized development environment with perfect
        time travel/reproduction, where whatever hardware "bloat" is needed
         wouldn't even affect the user? What is the actual difference from
         error handling in the past, besides easy sending of your crash dumps?
        
        > Engineering trade-offs. We accept a larger baseline to ship faster,
        safer code across many devices.
        
        But we do not do that! The code is too often slower precisely because
        people have a ready list of empty statements like this
        
        > Hardware grew ~three orders of magnitude. Developer time is often
        more valuable than RAM or CPU cycles
        
         What about the value of the time and resources of your users? Why
         ignore reality outside of this simplistic dichotomy?
         Or will the devs not even see the suffering, because the "robust
         error handling and reporting" is nothing of the sort - it mostly
         /dev/nulls a lot of user experience?
       
        vbezhenar wrote 15 hours 56 min ago:
        Sometimes bloat is forced on you.
        
         I had to write an Android app recently. I don't like bloat, so I
         disabled all libraries. Well, I did it, but I was jumping through
         many hoops. Android development presumes that you're using the
         appcompat libraries and some others. In the end my APK was 30 KB and
         worked on every smartphone I was interested in (from Android 8 to
         Android 16). The Android Studio Hello World APK is about 2 MB, if I
         remember correctly. This is truly madness.
       
          immibis wrote 6 hours 32 min ago:
          I used to release an Android app 10ish years ago (before I was banned
          by Google from releasing Android apps) and this was the case back
          then. Every time I opened Eclipse it would automatically add the 6MB
          "support library" to my otherwise 50kB app. There was no way to turn
          this off, but if you removed it, it would stay removed for that
          session. Usually I only noticed I'd forgotten when I built the final
          .apk and went to upload it to the App Store.
       
          pjmlp wrote 15 hours 46 min ago:
           The reason is that Android used to advocate how great it was versus
           J2ME fragmentation - a marketing move that only impressed those
           without experience. It turns out that a great deal of appcompat
           code is actually there to deal with Android fragmentation across
           OEMs and devices.
       
        aaaashley wrote 16 hours 2 min ago:
        The "development time is more important than performance" motto treats
        bad performance as the problem with software, when in reality poor
        performance is a symptom of growing software complexity. I'm sure that
        each individual coder who has contributed to software bloat believed
        that their addition was reasonable. I'm sure everyone who has advocated
        for microservices fully believes they are solving a real-world problem.
        The issue is that we don't treat complexity as a problem
        in-and-of-itself.
        
        In physical disciplines, like mechanical engineering, civil
        engineering, or even industrial design, there is a natural push towards
        simplicity. Each new revision is slimmer & more unified–more
        beautiful because it gets closer to being a perfect object that does
        exactly what it needs to do, and nothing extra. But in software,
        possibly because it's difficult to see into a computer, we don't have
        the drive for simplicity. Each new LLVM binary is bigger than the last,
        each new HTML spec longer, each new JavaScript framework more abstract,
        each new Windows revision more bloated.
        
        The result is that it's hard to do basic things. It's hard to draw to
        the screen manually because the graphics standards have grown so
        complicated & splintered. So you build a web app, but it's hard to do
        that from scratch because the pure JS DOM APIs aren't designed for app
        design. So you adopt a framework, which itself is buried under years of
        cruft and legacy decisions. This is the situation in many areas of
        computer science–abstractions on top of abstractions and within
        abstractions, like some complexity fractal from hell. Yes, each layer
        fixes a problem. But all together, they create a new problem. Some
        software bloat is OK, but all software bloat is bad.
        
        Security, accessibility, and robustness are great goals, but if we want
        to build great software, we can't just tack these features on. We need
        to solve the difficult problem of fitting in these requirements without
        making the software much more complex. As engineers, we need to build a
        culture around being disciplined about simplicity. As humans, we need
        to support engineering efforts that aren't bogged down by corporate
        politics.
       
          Gravityloss wrote 13 hours 43 min ago:
          We do see the same in physical engineering too. At some point some
          products have plateaued, there's no more development. But still you
          need to sell more, the designers need to get paid and they are used
          as status symbols and so on.
          
           One example is skirt length. You have fashion, and the only
           constant about it is change. If everybody's wearing short skirts,
           then longer skirts will need to be launched in fashion magazines
           and manufactured
          and sent to shops in order to sell more. The actual products have not
          functionally changed in centuries.
       
            aaaashley wrote 6 hours 45 min ago:
            I don't think that fashion trends are comparable. I think that
            fashion trends are fine in concept–things get old and we switch
            things up. It's the way the human superorganism is able to evolve
            new ideas. Unfortunately, capitalism accelerates these changes to
            an unreasonable pace, but even in Star Trek communism, people get
            bored. The cultural energy that birthed one style is no longer
            present, we always need something new that appeals to the current
            time.
            
            But clothes still have to look nice. Fashion designers have a
            motivation to make clothes that serve their purpose elegantly.
            Inelegance would be adding metal rails to a skirt so that you could
            extend its length at will. Sure, the new object has a new function,
            and its designer might feel clever, but it is uglier. But ugly
            software and beautiful software often look the same. So software
            trends end up being ugly, because no one involved had an eye for
            beauty.
       
            gwbas1c wrote 7 hours 39 min ago:
            Or people just start ignoring the trends and only replace their
            clothes when they wear out.
       
        raincole wrote 16 hours 4 min ago:
         Sometimes I feel the biggest contribution React made to the world
         isn't that it sped up frontend dev or something. It's showing how
         much discussion about performance is pure paranoia.
       
        foofoo12 wrote 16 hours 5 min ago:
         Bloat is OK when the alternative is worse. But have you really done
         the analysis of how bad and how much bigger the bloat is going to
         get? Still, it's justified if the alternative is worse.
       
        rubymamis wrote 16 hours 22 min ago:
         I want to benchmark how much battery life Electron apps drain from my
         computer compared to equivalent native apps. Then we can talk about
         whether bloat is OK.
         
         P.S. Does anyone know of someone who has tested this?
       
          esafak wrote 12 hours 35 min ago:
          You've used Electron and you need a benchmark to tell you if it is
          bloated?
       
            rubymamis wrote 10 hours 22 min ago:
            I know it's bloated! I want to prove that so the evidence is clear
            on how much it affects people's day-to-day use.
       
        usrbinenv wrote 16 hours 33 min ago:
         If I could make one law related to software, it would be to ban React
         and React-like frameworks (that includes Vue and Svelte, I believe):
         if you have to put more than one line of HTML in your JavaScript, if
         you have a VDOM, if you have a build step - I want this to be
         illegal. It is entirely possible to write a JS framework that
         attaches itself to the DOM from the top, without any kind of
         expensive hydration steps or VDOM or templating (I've built one).
         React is a horrible, complex monster that wastes developers' time,
         heats up users' CPUs and generally feels super slow and laggy. 99% of
         websites would work a lot better with SSR and a few lines of
         JavaScript here and there, and there is zero reason to bring anything
         like React to the table. React is the most tasteless thing ever
         invented in software.
       
          athanagor2 wrote 15 hours 26 min ago:
          > React and React-like frameworks (includes Vue and Svelte I believe)
          
          Putting React with those two is a wild take.
          
          > 99% percent of websites would work a lot better with SSR and a few
          lines of JavaScript here and there and there is zero reason to bring
          anything like React to the table.
          
           Probably, but as soon as you have a modicum of logic in your page,
           the primitives of the web are a pain to use.
          
          Also, I must be able to build stuff in the 1% space. I actually did
          it before: I built an app that's entirely client-side, with Vue, and
          "serverless" in the sense that it's distributed in the form of one
          single HTML file. Although we changed that in the last few months to
          host it on a proper server.
          
          The level of psychological trauma that some back-end devs seem to
          endure is hilarious though. Like I get it, software sucks and it's
          sad but no need to be dramatic about it.
          
           And btw, re forbidding stuff: no library, no process, no method can
           ever substitute for actually knowing what you're doing.
       
            usrbinenv wrote 15 hours 14 min ago:
             You can do very complex stuff without any need for a React-like
             approach. I literally said I've written a very sophisticated
             framework that was exceptionally snappy - that's what should be
             used for that 1% (not my framework, but the approach). Even
             better, I could introduce it very gradually and strategically on
             different SSR pages and then (if I wanted to) I could turn the
             whole app into an SPA - all without needing to "render" anything
             with JavaScript, VDOM or other such nonsense.
       
              brazukadev wrote 4 hours 53 min ago:
               Is this framework public? I think the same way as you and
               developed my own framework without the things you mentioned,
               and I would like to compare my approach with yours. The biggest
               difference is that I use lit-html for templating, which is
               quite efficient.
       
                usrbinenv wrote 4 hours 32 min ago:
                Kind of, but not really. I'll make it public soon when I have
                the documentation ready.
       
          graemep wrote 15 hours 49 min ago:
          >  It is entirely possible to write a js-framework that attaches
          itself to DOM from the top, without any kind of expensive hydration
          steps or VDOM or templating (I've built one)
          
          Can you elaborate more on how this works? Do you mean JS loading
          server generated HTML into the DOM?
       
            usrbinenv wrote 15 hours 46 min ago:
             Server renders the page. Suppose you have an element there which
             reads like <div data-component="HenloComponent">. Then the .js
             framework which was loaded on that page queries the DOM in search
             of any elements with a data-component attribute and creates
             instances of HenloComponent (which is a class written by you, the
             developer, the user of the framework). It's a bit more
             complicated than that, but that's the essence of it.
             
             Note that with this approach you don't need to "render" anything;
             the browser has already done it for you. You're merely attaching
             functionality to DOM elements in the form of component instances.
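             
             A stripped-down sketch of the idea in TypeScript (the names are
             invented; this is just the mechanism, not my actual framework):
             
                 // Component classes, keyed by the data-component value
                 const registry: Record<string,
                   new (el: HTMLElement) => unknown> = {};
                 
                 class HenloComponent {
                   constructor(el: HTMLElement) {
                     // Markup is already server-rendered; just add behavior
                     el.addEventListener("click", () =>
                       el.classList.toggle("active"));
                   }
                 }
                 registry["HenloComponent"] = HenloComponent;
                 
                 // On load, attach an instance to every annotated element
                 document.querySelectorAll<HTMLElement>("[data-component]")
                   .forEach((el) => {
                     const Cls = registry[el.dataset.component!];
                     if (Cls) new Cls(el);
                   });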
       
              graemep wrote 15 hours 35 min ago:
              Yes, that is what I was asking about.
              
               I entirely agree. It is what I do when I have to - although I
               mostly do simple JS as I am a backend developer really, and if
               I do any front end it's "HTML plus a bit of JS" and I just
               write JS loading stuff into divs by ID.
              
               When I have worked with front end developers doing stuff in
               React it has been a horrible experience.  In the very worst
               case they used Next.js to write a second backend that sat
               between my existing Django backend (which had been done
               earlier) and the front end. Great for latency! It was an
               extreme example but it really soured my attitude to complex
               front ends. The project died.
       
                athanagor2 wrote 15 hours 0 min ago:
                > In the very worst case they used next.js to write a second
                backend that sat between my existing Django backend (which had
                been done earlier) and the front end.
                
                That's hilarious.
                
                Casey Muratori truly is right when he says to "non-pessimize"
                software (= make it do what it should do and not more), before
                optimizing it.
       
                usrbinenv wrote 15 hours 18 min ago:
                Oh no, they do that? I thought Next.js is a fully functional
                backend itself, like Django. But I'm shocked to learn that it's
                just a middleman-backend to render templates that are already
                served from another backend.
       
                  graemep wrote 10 hours 21 min ago:
                   Next.js is a fully functional backend. It's not Next's
                   fault - I dislike it for other reasons, but this was not a
                   Next problem.
                  
                  The problem was that the front end developers involved
                  decided to use Next.js to replace the front end of a mostly
                  complete Django site. I think it was very much a case of
                  someone just wanting to use what they knew regardless of
                  whether it was a good fit - the "when all you have is a
                  hammer, everything looks like a nail" effect.
       
                    brazukadev wrote 4 hours 44 min ago:
                    It is Vercel's "fault" because it sold Next.JS as a BFF
                    (Backend-for-frontend), a concept that didn't exist but
                    helped them sell lots of hosting for React, something
                    absolutely unexpected to every old school React developer.
       
          anon-3988 wrote 16 hours 28 min ago:
          Why not ditch HTML itself? People are already downloading binary
          blobs on a daily basis anyway, just download some binary once,
          execute them in some isolated environment. So you only have to
          transmit data.
       
            graemep wrote 15 hours 55 min ago:
             I don't know to what extent you are serious, but I think the idea
             is that content should be HTML and apps should be just JS.
            
            We could go further and have a language written to run in a sandbox
            VM especially for that with a GUI library designed for the task
            instead of being derived from a document format.
       
              usrbinenv wrote 15 hours 50 min ago:
               Yeah, I think my point was misunderstood: part of what I'm
               opposed to is writing HTML (or worse, CSS) inside .js files.
       
                graemep wrote 10 hours 11 min ago:
                Yes, I think you were misunderstood.
                
                I think HTML inside JS is a code smell. I cannot even imagine
                why you would need CSS inside JS.
       
                  brazukadev wrote 4 hours 42 min ago:
                  lit-html templating inside JS is a code smell?
       
            usrbinenv wrote 16 hours 17 min ago:
             I don't see a problem with HTML. It's easy to learn, easy to use
             and a very nice format for the web. CSS is also great. JavaScript
             is pretty great too. My point is that the modern web is horrible
             because people with no taste and no understanding of the
             underlying technologies turned it into a horrible shitshow by
             inventing frameworks that turn the web upside down and make a
             bloated mess of it. I don't hate many things in life, but this
             one I hate with passion, because every time I visit a website, I
             can just feel it's made with React because of how slow and laggy
             it is.
       
            WesolyKubeczek wrote 16 hours 20 min ago:
            Ehrm, have you seen how fancy UI stuff is being implemented in
            so-called "native apps" these days? Anything more complicated than
            a button or label or other elements familiar since 1993 gets shoved
            into a webview and rendered with HTML and CSS.
       
        rossant wrote 16 hours 45 min ago:
        > There are still highly demanded optimized programs or parts of such
        programs which won't disappear any time soon. Here is a small fraction
        of such software:
        > ...
        > Such software will always exist, it just moved to some niche or
        became a lower level "backbone" of other higher level software.
        
        Yes. I’ve been working for years on building a GPU-based scientific
        visualization library entirely in C, [1] carefully minimizing heap
        allocations, optimizing tight loops and data structures, shaving off
        bytes of memory and microseconds of runtime wherever possible.
        Meanwhile, everyone else seems content with Electron-style bloat
        weighing hundreds of megabytes, with multi-second lags and 5-FPS
        interfaces. Sometimes I wonder if I’m just a relic from another era.
        But comments like this remind me that I’m simply working in a niche
        where these optimizations still matter.
        
  HTML  [1]: https://datoviz.org/
       
          ajoseps wrote 2 hours 28 min ago:
           I'm not familiar with this space, but I do remember using Plotly
           with WebGL to create interactive graphs when they had too many data
           points (financial tick data). I imagine this is quite a step up,
           and the project looks really cool! I hope you continue working on
           it.
       
          scott_w wrote 11 hours 57 min ago:
          You're always going to be up against "good enough" and inertia. For
          plenty of applications, a bloated Electron app really is good enough!
          
          The library you built looks fucking awesome, by the way. However, I
          think even you acknowledged on the page that Matplotlib may well be
          good enough for many use cases. If someone knows an existing tool
          extremely well, any replacement needs to be a major step change to
          solve a problem that couldn't be solved in existing, inefficient,
          tools.
       
            rossant wrote 11 hours 24 min ago:
            Thanks. The use cases for my library are pretty clear: anytime
            Matplotlib is way too slow or just crashes under the weight of too
            much data (and 3D).
       
          sgarland wrote 13 hours 34 min ago:
          Please keep doing what you’re doing; I appreciate the effort and
          mentality.
       
            rossant wrote 11 hours 26 min ago:
            Thanks!
       
        liampulles wrote 16 hours 46 min ago:
         The performance of a web service is largely dependent on how many
         calls it makes to external services and how quickly it can make them.
         If a given system is making 2 DB calls when it could instead make
         one, then that should be the initial focus for optimisation.
         
         Indeed, if a language and framework have slow code execution but
         facilitate efficient querying, they can still perform relatively
         well.
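         
         For example (hypothetical schema and a generic db.query client, just
         to illustrate the point):
         
             // Two round trips: fetch the user, then their orders
             const user = await db.query(
               "SELECT id, name FROM users WHERE id = $1", [id]);
             const orders = await db.query(
               "SELECT id, total FROM orders WHERE user_id = $1", [id]);
             
             // One round trip: join, then split the rows in memory
             const rows = await db.query(
               `SELECT u.id, u.name, o.id AS order_id, o.total
                  FROM users u
                  LEFT JOIN orders o ON o.user_id = u.id
                 WHERE u.id = $1`, [id]);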
       
        SurceBeats wrote 16 hours 56 min ago:
        The article is kind of right about legitimate bloat, but "premature
        optimization is evil" has become an excuse to stop thinking about
        efficiency entirely. When we choose Electron for a simple app or pull
        in 200 dependencies for basic tasks, we're not being pragmatic, we're
        creating complexity debt that often takes more time to debug than
        writing leaner code would have. But somehow here we are, so...
       
          sublinear wrote 15 hours 6 min ago:
          On the flip side, what you're saying is also an overused excuse to
          dismiss web apps and promote something else that's probably a lot
          worse for everyone.
          
          I've never seen a real world Electron app with a large userbase that
          actually has that many dependencies or performance issues that would
          be resolved by writing it as a native app. It's baffling to me how
          many developers don't realize how much latency is added and memory is
          used by requiring many concurrent HTTP requests. If you have a
          counterexample I'd love to see it.
       
          m-schuetz wrote 15 hours 58 min ago:
          I'd argue that the insane complexity of fast apps/APIs pushes many
          devs towards super slow but easy apps/APIs. There needs to be a
          middle ground, something that's easy to use and fast-enough, rather
          than trying to squeeze every last bit of perf while completely
          sacrificing usability.
       
            immibis wrote 6 hours 35 min ago:
            Java Swing? It was slow in 1999, which means it's fast now. It's
            also a much more sensible language than JavaScript. It's not native
            GUI, but neither is JavaScript anyway.
       
          empiko wrote 16 hours 19 min ago:
          What is often missing from the discussion is the expected lifecycle
           of the product. Using Electron for a simple app might be a good
           idea if it is a proof-of-concept or an app that will be used
           sparingly by a few people. But if you use it for the built-in
           calculator in your OS,
          the trade-offs are suddenly completely different.
       
            pjmlp wrote 15 hours 50 min ago:
             A large majority of Electron crap could be turned into a regular
             website, but then the developers would need to actually target
             the Web instead of the ChromeOS Platform, and apparently that is
             too hard.
       
              Incipient wrote 14 hours 15 min ago:
               I've recently gone back to more in-depth (but still indie) web
               dev with Vue.js and Quasar, and honestly I don't even find
               myself thinking about "targeting the Web" any more - I just
               write code and it seems to work on pretty much everything (I
               haven't tested Safari, to be fair).
       
                jtbaker wrote 12 hours 20 min ago:
                Vue is so good! I've been encouraged seeing more organizations
                mentioning using it (in the hiring thread etc.) lately.
       
          eitau_1 wrote 16 hours 22 min ago:
          The sad reality is that easy tech explores solution space faster
       
          nly wrote 16 hours 27 min ago:
           Fortunately, many apps seem to be moving to native webviews now
           instead of shipping Electron.
       
          0xEF wrote 16 hours 48 min ago:
          Thinking is hard, so any product that gives people an excuse to stop
          doing it will do quite well, even if it creates more inconveniences
          like framework bloat or dependency rot. This is why shoehorning AI
          into everything is so wildly successful; it gives people the okay to
          stop thinking.
       
          rossant wrote 16 hours 52 min ago:
          Yes. Too many people seem to forget the word "premature." This quote
          has been grossly misused to justify the most egregious cases of bloat
          and unoptimized software.
       
            SurceBeats wrote 16 hours 48 min ago:
             Yeah, somehow it went from "don't micro-optimize loops" to "500MB
             Electron apps are just fine actually" hahaha
       
              bluetomcat wrote 16 hours 7 min ago:
              And consequently, "you need 32GB of RAM just to be future-proof
              for the next 3 years".
       
              stared wrote 16 hours 20 min ago:
              I hope Tauri gets some traction ( [1] ).
               The single biggest benefit is a drastically smaller build size (
               [2] ).
              
              A 500MB Electron app can be easily a 20MB Tauri app.
              
  HTML        [1]: https://v2.tauri.app/
  HTML        [2]: https://www.levminer.com/blog/tauri-vs-electron
       
                brabel wrote 15 hours 39 min ago:
                 Not sure. Tauri apps run on the browser and browsers are
                 absolute memory hoarders. At any time my browser is by far
                 the biggest culprit of abusing available memory. Just look at
                 all the processes it starts; it’s insane. I’ve tried all the
                 popular browsers and they are all memory hogs.
       
                  Redster wrote 11 hours 56 min ago:
                  Based on [1] , it seems that Tauri uses native webviews,
                  which allows Tauri apps to be much smaller and less of a
                  memory hog than a tool which uses Electron and runs a whole
                  browser.
                  
  HTML            [1]: https://v2.tauri.app/concept/architecture/
       
                  dspillett wrote 12 hours 5 min ago:
                  A big complaint with Electron that Tauri does avoid is that
                  you package a specific browser with your app, ballooning the
                  installer for every Electron app by the size of Chromium.
                  The same goes for bundling NodeJS (or the equivalent backend
                  for Tauri), but that isn't quite as weighty, and there the
                  difference is which backend you get, not whether one is
                  there at all.
                  
                  In either case you end up with a fresh instance of the
                  browser (unless things have changed in Tauri since I last
                  looked), distinct from the one generally serving you as an
                  actual browser, so both do have the same memory footprint in
                  that respect. So you are right, that is an issue for both
                  options, but IME people away from development seem more
                  troubled by the package size than by interactive RAM use.
                  Tauri apps are also likely to start faster from cold, since
                  an Electron app has to load a complete new browser for which
                  every last byte needs to be read from disk; I think the
                  average non-dev user will be more concerned about that than
                  about memory use.
                  
                  There have been a couple of projects trying to be Electron,
                  complete with NodeJS, but using the user's currently
                  installed & default browser like Tauri does, and some others
                  that replace the back-end with something lighter-weight,
                  even more like Tauri, but most of them are currently
                  unmaintained, still officially alpha, or otherwise
                  incomplete, unstable, or both. Electron has the properties
                  of being here, being stable and maintained, and being good
                  enough until it isn't (and once it isn't, those moving off
                  it tend to go for something else entirely rather than
                  another system very like it) - it is difficult for newer
                  similar projects to compete with that momentum when the
                  “escape route” from Electron is generally to something
                  completely different.
       
                  peterfirefly wrote 12 hours 5 min ago:
                  Electron apps also run in a browser.  They package an entire
                  browser as part of the app.
       
              cpt_sobel wrote 16 hours 32 min ago:
                 The latest MS Teams update on macOS fetched an installer that
                 asked me for 1.2GB (Yes, G!) of disk space...
       
                xigoi wrote 9 hours 15 min ago:
                I recently found out that Teams was taking up over 5 GB on my
                laptop. The incompetence of Microsoft developers knows no
                bounds.
       
                thaumasiotes wrote 13 hours 7 min ago:
                 I recently set up a new virtual machine with Xubuntu when I
                 stopped being able to open my VirtualBox image.
                 
                 Turns out modern Ubuntu will only install Firefox as a snap.
                 And snap will then automatically grow to fill your entire
                 hard drive for no good reason.
                
                I'm not quite sure how people decided this was an approach to
                package management that made sense.
       
        Cthulhu_ wrote 17 hours 8 min ago:
        I work in enterprise, B2C software, lots of single page apps / self
        service portals, that kind of thing. I don't think our software is
        bloated per se; sure it could be faster, but for example when it comes
        to mobile apps, it's a tradeoff between having the best and fastest
        apps, being able to hire people, development speed, and non-functional
        requirements.
        
        It's good enough, and React Native, for example, is spending years
        and millions on further optimizations to make their "good enough"
        faster; the work they do is well beyond my pay grade. ( [1] )
        
  HTML  [1]: https://reactnative.dev/blog/2025/10/08/react-native-0.82#expe...
       
          liampulles wrote 16 hours 41 min ago:
          As far as internal services go, I agree, being able to easily add
          stuff is the main priority.
          
          For customer facing stuff, I think it's worth looking into frameworks
          that do backend templating and then doing light DOM manipulation to
          add dynamism on the client side. Frameworks like Phoenix make this
          very ergonomic.
          
          It's a useful tool to have in the belt.
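          
          (Phoenix is Elixir, but the same backend-templating-plus-light-DOM
          pattern can be sketched in Python with, say, Flask and htmx; the
          routes and template here are made up for illustration:)
          
            from datetime import datetime
            from flask import Flask, render_template_string
            
            app = Flask(__name__)
            
            # Full page rendered on the backend; the button swaps a fragment
            # into #out via htmx, with no client-side framework.
            PAGE = """
            <script src="https://unpkg.com/htmx.org"></script>
            <button hx-get="/fragment" hx-target="#out">Refresh</button>
            <div id="out"></div>
            """
            
            @app.get("/")
            def index():
                return render_template_string(PAGE)
            
            @app.get("/fragment")
            def fragment():
                # Backend templating returns just the updated fragment.
                return render_template_string("<p>{{ t }}</p>",
                                              t=datetime.now().isoformat())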
       
          bombcar wrote 16 hours 46 min ago:
          95% of portals could be done with 2000s tech (since they're basically
          CRUD apps) - the question is what it is worth to make them that way.
          
          And the answer is almost always "nothing" because "good enough" is
          fine.
          
          People like to shit on development tools like Electron, but the
          reality is that if the app is shitty on Electron, it'd probably be
          just as shitty on native code, because it is possible to write good
          Electron apps.
       
            bitwize wrote 6 hours 35 min ago:
            When I think of good Electron apps, the first name that pops to my
            mind is Visual Studio Code. They really did a lot of work to make
            the editor responsive and capable of sophisticated things without
            blowing your RAM/disk budget.
            
            But it's still bloated compared to the editor I use, Emacs.
            
            And it's still bloated compared to a Java-based IDE of equivalent
            functionality. (Eclipse and IntelliJ can do much more OOTB than VS
            Code can.)
       
            eviks wrote 15 hours 44 min ago:
            > but the reality is that if the app is shitty on Electron, it'd
            probably be just as shitty on native code
            
             Right off the bat it'll save hundreds of MB in app size, with a
             noticeable drop in startup time, so no, it won't be just as
             shitty.
            
            > because it is possible to write good Electron apps.
            
            The relevant issue is the difficulty in doing that, not the mere
            possibility.
       
        Grumbledour wrote 17 hours 13 min ago:
        The question is of course always where someone draws the line, and
        that's part of the problem.
        
        Too many people have the "Premature optimization is the root of all
        evil" quote internalized to a degree they won't even think about any
        criticisms or suggestions.
        
        And while they might be right about the small stuff, it often piles
        up: because you chose several times not to optimize, your technology
        choices and architecture decisions eventually add up to a bloated
        mess that can't be salvaged.
        
        Like, when you choose a web framework for a desktop app, install
        size, memory footprint, slower performance etc. might not matter
        looked at individually, but in the end it can all easily add up and
        your solution might just suck without much benefit to you. Pragmatism
        seems to be the hardest thing for most developers to learn, and so
        many solutions get blown out of proportion from the start.
       
          gwbas1c wrote 7 hours 41 min ago:
          What I once said to a less experienced developer in a code review is:
          
          > Don't write stupid slow code
          
          The context was that they wrote a double-lookup in a dictionary, and
          I was encouraging them to get into the habit of only doing a single
          lookup.
          
          Naively, one could argue that I was proposing a premature
          optimization; but the point was that we should develop habits where
          we choose the more efficient route when it adds no cost to our
          workflow and keeps code just as readable.
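           
           For illustration, the shape of that habit in Python (the review
           itself wasn't necessarily Python code):
           
             # Double lookup: the key is hashed and probed twice.
             def get_count_slow(counts: dict, word: str) -> int:
                 if word in counts:        # lookup #1
                     return counts[word]   # lookup #2
                 return 0
             
             # Single lookup: dict.get() does the same work in one probe.
             def get_count_fast(counts: dict, word: str) -> int:
                 return counts.get(word, 0)
           
           Same readability, same workflow, one lookup instead of two.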
       
          ndiddy wrote 13 hours 0 min ago:
          > Too many people have the "Premature optimization is the root of all
          evil" quote internalized to a degree they won't even think about any
          criticisms or suggestions.
          
           Yeah, I find it frustrating how many people interpret that quote as
          "don't bother optimizing your software". Here's the quote in context
          from the paper it comes from:
          
          > Programmers waste enormous amounts of time thinking about, or
          worrying about, the speed of noncritical parts of their programs, and
          these attempts at efficiency actually have a strong negative impact
          when debugging and maintenance are considered. We should forget about
          small efficiencies, say about 97% of the time: premature optimization
          is the root of all evil.
          
           > Yet we should not pass up our opportunities in that critical 3%.
           A
          good programmer will not be lulled into complacency by such
          reasoning, he will be wise to look carefully at the critical code;
          but only after that code has been identified.
          
           Knuth isn't saying "don't bother optimizing"; he's saying "don't
           bother optimizing before you profile your code". These are two very
           different points.
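           
           In Python, for example, "identifying the critical code" can be as
           cheap as a few lines of profiling before touching anything (work()
           here is a stand-in for the real program):
           
             import cProfile
             import pstats
             
             def work():
                 return sum(i * i for i in range(10**6))
             
             with cProfile.Profile() as pr:
                 work()
             
             # Show the five most expensive call sites; optimize only those.
             pstats.Stats(pr).sort_stats("cumulative").print_stats(5)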
       
            WBrentWilliams wrote 12 hours 8 min ago:
            I'm old.
            
            My boss (and mentor) from 25 years ago told me to think of the
            problems I was solving with a 3-step path:
            
            1. Get a solution working
            
            2. Make the solution correct
            
            3. Make the solution efficient
            
             Most importantly, he emphasized that the work must be done in
             that order. I've taken that everywhere with me.
            
            I think one of the problems is that quite often, due to business
            pressure to ship, step 3 is simply skipped. Often, software is
            shipped half-way through step 2 -- software that is at best
            partially correct.
            
             This pushes the problem down to the user, who might be building a
             system around the shipped code. It compounds the problem of
             software bloat, as all the gaps have to be bridged.
       
          sgarland wrote 13 hours 25 min ago:
           It is forever baffling to me that so many devs don’t seem to
           appreciate that small performance issues compound, especially when
           they’re in a hot path and have dependent calls.
          
          Databases in particular, since that’s my job. “This query runs in
          2 msec, it’s fast enough.” OK, but it gets called 10x per flow
          because the ORM is absurdly stupid; if you cut it down by 500
          microseconds, you’d save 5 msec. Or if you’d make the ORM behave,
          you could save 18 msec, plus the RTT for each query you neglected to
          account for.
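           
           For anyone who hasn't seen the pattern, a self-contained sketch of
           that N+1 behaviour and its fix, using SQLAlchemy on an in-memory
           SQLite database (the schema is invented for illustration):
           
             from sqlalchemy import ForeignKey, create_engine
             from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                                         joinedload, mapped_column,
                                         relationship)
             
             class Base(DeclarativeBase):
                 pass
             
             class Customer(Base):
                 __tablename__ = "customers"
                 id: Mapped[int] = mapped_column(primary_key=True)
                 name: Mapped[str]
             
             class Order(Base):
                 __tablename__ = "orders"
                 id: Mapped[int] = mapped_column(primary_key=True)
                 customer_id: Mapped[int] = mapped_column(
                     ForeignKey("customers.id"))
                 customer: Mapped[Customer] = relationship()
             
             engine = create_engine("sqlite://", echo=True)  # echo logs SQL
             Base.metadata.create_all(engine)
             
             with Session(engine) as session:
                 c = Customer(name="ada")
                 session.add_all([c, Order(customer=c), Order(customer=c)])
                 session.commit()
             
                 # Lazy loading: 1 SELECT for the orders, then 1 more per
                 # order when .customer is touched - the N+1 pattern.
                 for order in session.query(Order).all():
                     _ = order.customer.name
             
                 # Eager loading: one JOINed SELECT replaces the N extras.
                 for order in (session.query(Order)
                               .options(joinedload(Order.customer)).all()):
                     _ = order.customer.name
           
           With echo=True you can watch the per-order SELECTs disappear.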
       
          arethuza wrote 16 hours 50 min ago:
           I've never interpreted "Premature optimization..." to mean don't
           think about performance, just that you don't have to implement
           mechanisms to increase performance until you actually have
           requirements to do so - you should always ask of a design "how
           could I make this perform better if I had to?"
       
            aleph_minus_one wrote 16 hours 44 min ago:
            To me, it rather meant: "Ultrahard" optimization is perfectly fine
            and a good idea, but not before it has become clear that the
            requirements won't change anymore (because highly optimized code is
            very often much harder to change to include additional
            requirements).
            
            Any different interpretation in my opinion leads to slow,
            overbloated software.
       
              arethuza wrote 16 hours 33 min ago:
               Yeah - I've heard that described as "It's easier to make
               working things fast than fast things work" - or something like
               that.
       
          AlotOfReading wrote 16 hours 55 min ago:
          I've found that mentioning bloat is the fastest way to turn a
          technical conversation hostile.
          
           Do we need a dozen components of half a million lines each,
           maintained by a separate team, for the hotdesk reservation page?
           I'm not sure, but I'm definitely not willing to endure the
           conversation that would follow from asking.
       
        InMice wrote 17 hours 28 min ago:
        When I click on some of the stuff on the page I'm getting redirected
        to spammy Opera download pages.
       
          binaryturtle wrote 17 hours 2 min ago:
          I guess that's the acceptable software bloat the site is probably
          talking about. :)  (I have not clicked. I read the comments first to
          find out if it's worth clicking.)
       
            InMice wrote 16 hours 3 min ago:
            Lol. It's the behavior I see when there's a malicious Chrome
            plugin installed. A link on a page randomly loads a spam site in a
            new tab, but the links work normally after that. I'm pretty sure
            it's none of my plugins, though.
       
       
   DIR <- back to front page