URI: 
        _______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                             on Gopher (inofficial)
  HTML Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
  HTML   Tags to make HTML work like you expect
       
       
        est wrote 13 hours 21 min ago:
        the lang="en" always irritates me.
        
        What if the page has mixed language content?
        
         e.g. on the /r/france/ subreddit on Reddit. The page says lang="en"
         because every subreddit shares the same template, but the actual
         content is written by French-speaking users.
       
          Telaneo wrote 13 hours 12 min ago:
           Use lang="" if you don't know what language your page will be in,
           and then set lang on whatever other-language content. Content from
           users that isn't tagged with a specific language doesn't really fit
           into this system, though.
          
  HTML    [1]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
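           A minimal sketch of that per-element tagging (the markup is
           illustrative, not taken from MDN):

```html
<html lang="en">
  <body>
    <p>The page chrome is in English.</p>
    <!-- user content in another language gets its own lang -->
    <blockquote lang="fr">Bonjour tout le monde !</blockquote>
    <!-- unknown user-generated language: empty lang means "undetermined" -->
    <p lang="">user text here</p>
  </body>
</html>
```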
       
            est wrote 12 hours 41 min ago:
            That sounds good in theory. On bsky.social you are supposed to
            choose a lang before posting.
            
             But again there's the mixed-language issue
            
            Or do users even bother to choose the correct lang?
       
          acdha wrote 13 hours 12 min ago:
          This is one of the great parts of the web: you can tag every element
          with the global lang attribute and have things work the way you
          expect.
          
           For example, you can have CSS generate the appropriate quotation
           marks even in nested contexts, so you can painlessly use <q> tags to
           mark up scholarly articles even if the site itself is translated and
           thus would have different nested quotation marks for, say, the French
           version embedding an English quote including a French quote, or vice
           versa.
          
          In your Reddit example, the top level page should be in the user’s
          preferred site language with individual posts or other elements using
          author’s language: …
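           The nested-quotes behaviour can be sketched in CSS like this (the
           quote pairs are chosen for illustration):

```css
/* Browsers pick open/close quote pairs from the quoted element's language */
q:lang(en) { quotes: "\201C" "\201D" "\2018" "\2019"; }           /* “ ” then ‘ ’ */
q:lang(fr) { quotes: "\00AB\00A0" "\00A0\00BB" "\201C" "\201D"; } /* « » then “ ” */
```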
       
          zeroq wrote 13 hours 18 min ago:
          You can add lang attributes to elements too!
          
  HTML    [1]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
       
        extraduder_ire wrote 15 hours 33 min ago:
         I thought <!doctype html> automatically implied utf-8, or have things
         changed since HTML5 was the new hotness?
       
        chrisofspades wrote 16 hours 45 min ago:
        If your IDE supports Emmet (supported by VS Code out of the box) then
        you can use "!"-tab to get the same tags.
       
        chroma_zone wrote 17 hours 56 min ago:
         I usually add:
         
           <meta name="color-scheme" content="light dark">
         
         which gives you a nice automatic dark theme "for free"
       
          jimniels wrote 17 hours 4 min ago:
          Ah this is a good one! I should maybe start considering this as a
          default...
       
        cluckindan wrote 19 hours 10 min ago:
        CSS rules to make styling work like you expect:
        
             *, *:before, *:after {
               box-sizing: border-box;
             }
       
        wpollock wrote 23 hours 47 min ago:
        I appreciate this post!  I was hoping you would add an inline CSS style
        sheet to take care of the broken defaults.  I only remember one off the
        top of my head, the rule for monospace font size.  You need something
        like:
        
           code, pre, tt, kbd, samp {
             font-family: monospace, monospace;
           }
        
        But I vaguely remember there are other broken CSS defaults for links,
        img tags, and other stuff.  An HTML 5 boilerplate guide should include
        that too, but I don't know of any that do.
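         A couple of the other commonly patched defaults look like this (a
         sketch of typical reset rules, not a definitive boilerplate):

```css
img { max-width: 100%; height: auto; } /* keep images from overflowing their container */
a { overflow-wrap: break-word; }       /* stop long link URLs from blowing out the layout */
```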
       
          keane wrote 19 hours 16 min ago:
           Paired with H5BP you can use Normalize.css (as an alternative to a
           reset like [1] ) found at [2]. There's also this short reset:
          
  HTML    [1]: http://meyerweb.com/eric/tools/css/reset/
  HTML    [2]: https://github.com/necolas/normalize.css/blob/master/normali...
  HTML    [3]: https://www.joshwcomeau.com/css/custom-css-reset/
       
        notepad0x90 wrote 1 day ago:
        I'm not a web developer, so if someone can please enlighten me: Why
        does this site, and so many "modern" sites like it have it so that the
        actual content of the site takes up only 20% of my screen?
        
         My browser window is 2560x1487. 80% of the screen is blank. I have to
         zoom to 170% to read the content. With older blogs I don't have this
         issue; it just works. Is it on purpose, or is it bad CSS? Given the
         title of the post, I think this is somewhat relevant.
       
          scotty79 wrote 20 hours 55 min ago:
          Your pixels are too small. Enable system scaling for high dpi
          screens.
       
          qingcharles wrote 21 hours 14 min ago:
          This site was designed many moons ago, for another age. That's both a
          blessing and a curse, but much more of a blessing. As you've found,
          you can fix the zoom.
          
          It's rare to see a site as popular as HN which has made almost zero
          changes to the UI over its entire history.
       
          harrall wrote 22 hours 33 min ago:
          You'll notice newspapers use columns and do not extend the text all
          the way left to right either. It's a typographical consideration, for
          both function and style.
          
          From a functional standpoint: Having to scan your eyes left to right
          a far distance to read makes it more uncomfortable. Of course, you
          could debate this and I'm sure there are user preferences, but this
          is the idea behind limiting the content width.
          
          From a stylistic standpoint: It just looks “bad” if text goes all
          the way from the left to right because the paragraph looks "too thin"
          like "not enough volume" and "too much whitespace." It’s about
          achieving a ratio of background to text that’s visually
          pleasurable. With really wide widths, paragraphs can end really early
          on the left, leaving their last lines really “naked” where you
          see all this whitespace inconsistently following some paragraphs. I
           can't really explain why this looks bad any further, though. It’s
           kind of like picking color combinations; the deciding factor isn't
           any rule: it's just "does it look pretty?"
          
          In the case of the site in question, the content width is really
          small. However, if you notice, each paragraph has very few words so
          it may have been tightened up for style reasons. I would have made
          the same choice.
          
          That said, if you have to zoom in 170% to read the content and
          everything else is not also tiny on your screen, it may be bad CSS.
       
            notepad0x90 wrote 22 hours 2 min ago:
             Newspapers have less than 5% margin for whitespace; they're smart
             enough to have multiple columns. It's also a side-effect of how
             every line costs money and they need to cram as much content as
             possible onto one page.
            
             I get not wanting to read all the way to the edge and back, and I
             even get having margins, but it should be relative to the screen
             size. Fixed width is the issue, I think. To avoid paragraphs
             looking too thin, maybe increasing the font size relative to the
             screen size makes sense? I'd think there is a way to specify a
             reference screen resolution to the browser so that it can
             auto-increase the font sizes and/or adjust the div's width.
       
              wonger_ wrote 4 hours 10 min ago:
              For font size that increases with screen size, you can use some
              clamp() math, like:
              
                --step-0: clamp(1.125rem, 1.0739rem + 0.2273vw, 1.25rem);
              
              Taken from
              
   HTML        [1]: https://utopia.fyi/type/calculator?c=360,18,1.2,1240,20,1.25,5,2,&s=0.75|0.5|0.25,1.5|2|3|4|6,s-l&g=s,l,xl,12
       
              harrall wrote 20 hours 47 min ago:
              You're not wrong. Increasing font size is one method.
              
              Another method I like to use is to adjust the amount of words per
              paragraph depending on the medium. I will literally write more or
              less just to attain my personal favorite of 3-6 visual lines per
              paragraph.
              
              Or sometimes I will more readily join paragraphs or split them
              more often in a text just to achieve my target.
              
              Decreasing width is actually just really easy and also works
              really well when the type of content can vary.
              
              All of this seems like some serious overkill attention to detail
              I know, but I guess it's a big deal for some people. For example,
              most people don't really care about dressing nice regularly
              anymore when they get older and marry because it frankly doesn't
              matter anymore (and they're totally right), but people who like
              fashion still care up until the end.
       
          flufluflufluffy wrote 1 day ago:
          Probably to not have incredibly wide paragraphs. I will say though, I
          set my browser to always display HN at like 150% zoom or something
          like that. They definitely could make the default font size larger.
          On mobile it looks fine though.
       
            notepad0x90 wrote 1 day ago:
             I have HN on 170% zoom too. This is a bad design pattern; I
             shouldn't have to zoom in on every site. Either increasing the
             font or making sure the content is always at least 50% of the
             page would be great for me.
       
          Menu_Overview wrote 1 day ago:
          Often times that is to create a comfortable reading width. ( [1] )
          
  HTML    [1]: https://ux.stackexchange.com/questions/108801/what-is-the-be...
       
            Telaneo wrote 13 hours 3 min ago:
             This breaks so hard with my own preferences that it's hard to
             believe it to be true, and the relevant studies aren't all that
             convincing. The
            few websites I find that go to 150/200+ character lines are a
            blessing to read when I do. I only get to do this on my desktop on
            very odd sites or completely unstyled HTML pages (and there you
            don't get line-spacing, which is something I do want. I should
            probably write a script to fix that), and Hacker News, and I never
            want that to go away.
            
            This wouldn't be the first thing I'm just weird about. Similarly, I
            find reading justified text to be just horrible, as I constantly
            lose track of what line I'm on. This one I believe has been
            debunked and raised as a genuine accessibility concern, but not all
            parts of the world have gotten around to recognising that. I'm also
            not a fan of serifed fonts, even in books. I'm not sure if there
            have been any studies made about that, as the serifs are supposed
            to be there to aid reading when printed on paper, but I
            consistently find a good sans-serif font to be better in all cases.
       
            AlienRobot wrote 22 hours 11 min ago:
            I wonder if this research is really valid. It was published 20
            years ago, there is nothing in the abstract about arcdegrees and I
            can't read the full thing, and it's cited with zero consideration
            for the actual content being presented.
            
            If Wikipedia had 70 characters per line I would never read it.
       
              notepad0x90 wrote 22 hours 1 min ago:
              That's a good point, 2560p wasn't a popular resolution back then.
              I'm sure people browsing in 4k suffer worse.
       
          majora2007 wrote 1 day ago:
          I'm not sure, but when I was working with UX years ago, they designed
          everything for a fixed width and centered it in the screen.
          
          Kinda like how HackerNews is, it's centered and doesn't scale to my
          full width of the monitor.
       
            notepad0x90 wrote 23 hours 57 min ago:
            I understand not using the full width, but unless you zoom in, it
            feels like I'm viewing tiny text on a smart phone in portrait mode.
            
            You would think browsers themselves would handle the rest, if the
            website simply specified "center the content div with 60% width" or
            something like that.
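             That rule is roughly a one-liner in CSS (`.content` is a
             hypothetical wrapper element):

```css
.content {
  max-width: 60%; /* scales with the viewport instead of a fixed pixel width */
  margin: 0 auto; /* center horizontally */
}
```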
       
        flymasterv wrote 1 day ago:
        I still don’t understand what people think they’re accomplishing
        with the lang attribute. It’s trivial to determine the language, and
        in the cases where it isn’t, it’s not trivial for the reader,
        either.
       
          maxeda wrote 20 hours 35 min ago:
          Another good reason for using the lang attribute is that it makes it
          possible to enable automatic hyphenation.
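           A minimal sketch of that interaction: `hyphens: auto` only takes
           effect when the browser knows the text's language from a lang
           attribute:

```css
/* With e.g. <html lang="en">, the browser can pick the right hyphenation dictionary */
p { hyphens: auto; }
```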
       
          janwillemb wrote 23 hours 59 min ago:
          Doesn't it state this in the article?
          
          > Browsers, search engines, assistive technologies, etc. can leverage
          it to:
          
          > - Get pronunciation and voice right for screen readers
          
          > - Improve indexing and translation accuracy
          
          > - Apply locale-specific tools (e.g. spell-checking)
       
            flymasterv wrote 23 hours 43 min ago:
            It states the cargo culted reasons, but not the actual truth.
            
             1) Pronunciation is either solved by a) automatic language
            detection, or b) doesn't matter. If I am reading a book, and I see
            text in a language I recognize, I will pronounce it correctly, just
            like the screen reader will. If I see text in a language I don't
            recognize, I won't pronounce it correctly, and neither will the
            screen reader.    There's no benefit to my screen reader pronouncing
            Hungarian correctly to me, a person who doesn't speak Hungarian. On
            the off chance that the screen reader gets it wrong, even though I
            do speak Hungarian, I can certainly tell that I'm hearing
            english-pronounced hungarian. But there's no reason that the screen
            reader will get it wrong, because "Mit csináljunk, hogy boldogok
            legyünk?" isn't ambiguous. It's just simply Hungarian, and if I
            have a Hungarian screen reader installed, it's trivial to figure
            that out.
            
            2) Again, if you can translate it, you already know what language
            it is in. If you don't know what language it is in, then you can't
            read it from a book, either.
            
            3) See above. Locale is mildly useful, but the example linked in
            the article was strictly language, and spell checking will either
            a) fail, in the case of en-US/en-UK, or b) be obvious, in the case
            of 1) above.
            
            The lang attribute adds nothing to the process.
       
              bilkow wrote 22 hours 55 min ago:
              Your whole comment assumes language identification is both
              trivial and fail-safe. It is neither and it can get worse if you
               consider e.g. cases where the page has elements in different
               languages, or distinct languages that closely resemble each
               other.
              
              Even if language identification was very simple, you're still
              putting the burden on the user's tools to identify something the
              writer already knew.
       
                flymasterv wrote 21 hours 15 min ago:
                Language detection (where “language”== one of the 200
                languages that are actually used), IS trivial, given a
                paragraph of text.
                
                And the fact is that the author of the web page doesn’t know
                the language of the content, if there’s anything user
                contributed. Should you have to label every comment on HN as
                “English”? That’s a huge burden on literally every
                internet user. Other written language has never specified its
                language. Imposing data-entry requirements on humans to satisfy
                a computer is never the ideal solution.
       
                  ElectricalUnion wrote 16 hours 28 min ago:
                   I wish this comment were true, but due to a foolish attempt
                   to squish all human characters into 2 bytes as UCS (which
                   failed and turned into the ugly UTF-16 mess), a disaster
                   called Han Unification was unleashed upon the world, and now
                   out-of-band communication is required to render the correct
                   Han characters on a page and not offend people.
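                   A sketch of that out-of-band signal: the same code point can
                   render with different regional glyph forms depending on the
                   declared language (U+9AA8 is a commonly cited example):

```html
<!-- Same character, potentially different glyphs via language-aware font selection -->
<p lang="zh-Hans">骨</p>
<p lang="ja">骨</p>
```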
       
                  bilkow wrote 19 hours 47 min ago:
                  > 200 languages that are actually used
                  
                  Do you have any reference of that or are you implying we
                  shouldn't support the other thousands[0] of languages in use
                  just because they don't have a big enough user base?
                  
                  > And the fact is that the author of the web page doesn’t
                  know the language of the content, if there’s anything user
                  contributed. Should you have to label every comment on HN as
                  “English”? That’s a huge burden on literally every
                  internet user.
                  
                  In the case of Hacker News or other pages with user submitted
                  and multi-language content, you can just mark the comments'
                  lang attribute to the empty string, which means unknown and
                  falls back to detection. Alternatively, it's possible to let
                  the user select the language (defaulting to their last used
                  or an auto-detected one), Mastodon and BlueSky do that. For
                  single language forums and sites with no user-generated
                  content, it's fine to leave everything as the site language.
                  
                  > Other written language has never specified its language.
                  Imposing data-entry requirements on humans to satisfy a
                  computer is never the ideal solution.
                  
                  There's also no "screen reader" nor "auto translation" in
                  other written language. Setting the content language helps to
                  improve accessibility features that do not exist without
                  computers.
                  
                  [0]
                  
  HTML            [1]: https://www.ethnologue.com/insights/how-many-languag...
       
        daneel_w wrote 1 day ago:
         For clarity and conformity, while optional these days, I insist on
         placing meta information within <head>.
       
        dinkelberg wrote 1 day ago:
         TFA itself has an incorrect DOCTYPE. It’s missing the whitespace
         between "DOCTYPE" and "html". Also, all spaces between HTML attributes
         were removed, although the HTML spec says: "If an attribute using the
         double-quoted attribute syntax is to be followed by another attribute,
         then there must be ASCII whitespace separating the two." ( [1] ) I
         guess the browser gets it anyway. This was probably done automatically
         by an HTML minifier. Actually the minifier could have generated fewer
         bytes by using the unquoted attribute value syntax (`lang=en-us id=top`
         rather than `lang="en-us"id="top"`).
        
        Edit: In the `minify-html` Rust crate you can specify
        "enable_possibly_noncompliant", which leads to such things. They are
        exploiting the fact that HTML parsers have to accept this per the
        (parsing) spec even though it's not valid HTML according to the
        (authoring) spec.
        
  HTML  [1]: https://html.spec.whatwg.org/multipage/syntax.html#attributes-...
       
          zamadatix wrote 16 hours 45 min ago:
          For anyone else furiously going back and forth between TFA and this
          comment: they mean the actual website of TFA has these errors, not
          the content of TFA.
       
          9029 wrote 23 hours 42 min ago:
          Maybe a dumb question but I have always wondered, why does the
          (authoring?) spec not consider e.g. "doctypehtml" as valid HTML if
          compliant parsers have to support it anyway? Why allow this situation
          where non-compliant HTML is guaranteed to work anyway on a compliant
          parser?
       
            AlienRobot wrote 22 hours 14 min ago:
            Same reason  is invalid.
       
            LegionMammal978 wrote 22 hours 30 min ago:
            It's considered a parse error [0]: it basically says that a parser
            may reject the document entirely if it occurs, but if it accepts
            the document, then it must act as if a space is present. In
            practice, browsers want to ignore all parse errors and accept as
            many documents as possible.
            
            [0]
            
  HTML      [1]: https://html.spec.whatwg.org/multipage/parsing.html#parse-...
       
              9029 wrote 22 hours 10 min ago:
              > a parser may reject the document entirely if it occurs
              
              Ah, that's what I was missing. Thanks! The relevant part of the
              spec:
              
              > user agents, while parsing an HTML document, may abort the
              parser at the first parse error that they encounter for which
              they do not wish to apply the rules described in this
              specification.
              
              ( [1] )
              
  HTML        [1]: https://html.spec.whatwg.org/multipage/parsing.html#pars...
       
            HWR_14 wrote 23 hours 31 min ago:
            Because there are multiple doctypes you can use. The same reason
            "varx" is not valid and must be written "var x".
       
        teekert wrote 1 day ago:
        Nice, the basics again, very good to see. 
        But then:
        
         I know what you’re thinking, I forgot the most important snippet of
         them all for writing HTML:
         
            <div id="root"></div>
         
         Lol.
        
        -> Ok, thanx, now I do feel like I'm missing an inside joke.
       
          Ayesh wrote 1 day ago:
           It's a typical pattern in, say, React, to have just this scaffolding
           in the HTML and let some front-end framework build the UI.
       
        Grom_PE wrote 1 day ago:
        I hate how because of iPhone and subsequent mobile phones we have bad
        defaults for webpages so we're stuck with that viewport meta forever.
        
        If only we had UTF-8 as a default encoding in HTML5 specs too.
       
          jonhohle wrote 1 day ago:
          I came here to say the same regarding UTF-8. What a huge miss and
          long overdue.
          
          I’ve had my default encoding set to UTF-8 for probably 20 years at
          this point, so I often miss some encoding bugs, but then hit others.
       
        reconnecting wrote 1 day ago:
        I wish I could use this one day again to make my HTML work as expected.
       
        orliesaurus wrote 1 day ago:
        Quirks quirks aside there are other ways to tame old markup...
        
        If a site won't update itself you can... use a user stylesheet or
        extension to fix things like font sizes and colors without waiting for
        the maintainer...
        
        BUT for scripts that rely on CSS behaviors there is a simple check...
        test document.compatMode and bail when it's not what you expect...
        sometimes adding a wrapper element and extracting the contents with a
        Range keeps the page intact...
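         A minimal sketch of that compatMode guard (the function name is mine):

```javascript
// document.compatMode is "CSS1Compat" in standards mode
// and "BackCompat" in quirks mode.
function isStandardsMode(doc) {
  return doc.compatMode === "CSS1Compat";
}

// In a browser you might gate a script on it:
//   if (!isStandardsMode(document)) { /* bail or fall back */ }
```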
        
        ALSO adding semantic elements and ARIA roles goes a long way for
        accessibility... it costs little and helps screen readers navigate...
        
        Would love to see more community hacks that improve usability without
        rewriting the whole thing...
       
        brianzelip wrote 1 day ago:
         > `<html lang="en">`
         
         The author might consider instead:
         
         `<html lang="en-US">`
       
          Etheryte wrote 18 hours 45 min ago:
          Those two mean two very different things though, why would the author
          do that? Please see RFC 5646 [0], "en" means English without any
          further constraints, "en-US" means English as used in the United
          States.
          
          [0]
          
  HTML    [1]: https://datatracker.ietf.org/doc/html/rfc5646
       
          childintime wrote 22 hours 37 min ago:
          It's time for an "en-INTL" (or similar) for international english,
          that is mostly "en-US", but implies a US-International keyboard and
          removes americanisms, like Logical Punctuation in quotes [1]. Then AI
          can start writing for a wider and much larger public (and can also
          default to regular ISO units instead of imperial baby food).
          
          Additionally, it's kind of crazy we are not able to write any
          language with any keyboard, as nowadays we just don't know the idiom
          the person who sits behind the keyboard needs.
          
  HTML    [1]: https://slate.com/human-interest/2011/05/logical-punctuation...
       
            Telaneo wrote 12 hours 54 min ago:
            en-DK is used for this in some cases, giving you English, but with
            metric units and an ISO keyboard among other things.
            
            A dedicated one for International English, or heck, even just
            EU-English, would be great.
            
            The EU websites just use en from what I can tell, but they also
            just use de, fr, sv, rather than specifying country (except pt-PT,
            which makes sense, since pt-BR is very common, but not relevant for
            the EU).
       
            fijiaarone wrote 13 hours 55 min ago:
            We should also enforce a standard where every website has to change
            their content to match the user’s preferred idiomatic diss,
             whether it be “yo momma”, “deez nuts”, “six seven”,
             or a series of hottentot tongue clicks recorded in Ogg Vorbis.
       
            qingcharles wrote 20 hours 54 min ago:
            Isn't that what "en" on its own should be, though?
       
          mobeigi wrote 1 day ago:
          Interesting.
          
          From what I can tell this allows some screen readers to select
          specific accents. Also the browser can select the appropriate spell
          checker (US English vs British English).
       
        dugmartin wrote 1 day ago:
         I know this was a joke:
         
            <body>
              <div id="root"></div>
            </body>
         
         but I feel there is a last tag missing:
         
            <main>...</main>
        
         that will ensure screenreaders skip all your page "chrome" and make
         life much easier for a lot of folks.  As a bonus, mark any navigation
         elements inside main using <nav> (or role="navigation").
       
          petecooper wrote 23 hours 54 min ago:
          >I know this was a joke
          
          I'm…missing the joke – could someone explain, please? Thank you.
       
            bitbasher wrote 23 hours 10 min ago:
            It's because "modern" web developers are not writing web pages in
            standard html, css or js. Instead, they use javascript to render
            the entire thing inside a root element.
            
            This is now "standard" but breaks any browser that doesn't (or
            can't) support javascript. It's also a nightmare for SEO,
            accessibility and many other things (like your memory, cpu and
            battery usage).
            
            But hey, it's "modern"!
       
            SomeHacker44 wrote 23 hours 36 min ago:
            Not a front end engineer but I imagine this boilerplate allows the
            JavaScript display engine of choice to be loaded and then rendered
            into that DIV rather than having any content on the page itself.
       
          eska wrote 1 day ago:
          I’m not a blind person but I was curious about once when I tried to
          make a hyper-optimized website. It seemed like the best way to please
          screen readers was to have the navigation HTML come last, but style
          it so it visually comes first (top nav bar on phones, left nav menu
          on wider screens).
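           One way to sketch that "last in source, first on screen" layout
           (flex order is one mechanism; grid or absolute positioning also
           work):

```css
body { display: flex; flex-direction: column; }
body > nav { order: -1; } /* nav is last in the DOM but renders first */
```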
       
            marcosdumay wrote 1 day ago:
            Just to say, that makes your site more usable in text browsers too,
            and easier to interact with the keyboard.
            
             I remember HTML has a way to create global shortcuts inside a
             page, so you press a key combination and the cursor moves directly
             to a pre-defined place. If I remember right, it's recommended to
             add some of those pointing to the menu, the main content, and
             whatever other relevant areas you have.
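             The mechanism being half-remembered here is likely the global
             accesskey attribute (a sketch; the activation chord varies by
             browser and OS):

```html
<a href="#menu" accesskey="m">Menu</a>
<a href="#content" accesskey="c">Main content</a>
```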
       
            sholladay wrote 1 day ago:
            Props to you for taking the time to test with a screen reader, as
            opposed to simply reading about best practices. Not enough people
            do this. Each screen reader does things a bit differently, so
            testing real behavior is important. It's also worth noting that a
            lot of alternative input/output devices use the same screen reader
            protocols, so it's not only blind people you are helping, but
            anyone with a non-traditional setup.
            
            Navigation should come early in document and tab order. Screen
            readers have shortcuts for quickly jumping around the page and
            skipping things. It's a normal part of the user experience. Some
            screen readers and settings de-prioritize navigation elements in
            favor of reading headings quickly, so if you don't hear the
            navigation right away, it's not necessarily a bug, and there's a
            shortcut to get to it. The most important thing to test is whether
            the screen reader says what you expect it to for dynamic and
            complex components, such as buttons and forms, e.g. does it
            communicate progress, errors, and success? It's usually pretty easy
            to implement, but this is where many apps mess up.
       
              cluckindan wrote 19 hours 3 min ago:
              ”Each screen reader does things a bit differently, so testing
              real behavior is important.”
              
              Correction: each screen reader + os + browser combo does things a
              bit differently, especially on multilanguage React sites. It is a
              full time job to test web sites on screen readers.
              
              If only there was a tool that would comprehensively test all
              combos on all navigation styles (mouse, touch, tabbing, screen
              reader controls, sip and puff joysticks, chin joysticks, eye
              trackers, Braille terminals, etc)… but there isn’t one.
       
            striking wrote 1 day ago:
            You want a hidden "jump to content" link as the first element
            available to tab to.
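             A common way to do that (not spelled out above) is to keep the
             link off-screen until it receives keyboard focus; a sketch with
             illustrative class and id names:

```html
<!-- First focusable element on the page. -->
<a class="skip-link" href="#content">Jump to content</a>

<style>
  .skip-link {
    position: absolute;
    left: -9999px; /* off-screen by default */
  }
  .skip-link:focus {
    left: 0; /* visible once a keyboard user tabs to it */
  }
</style>

<main id="content">…</main>
```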
       
            hnthrowaway121 wrote 1 day ago:
            Wouldn’t that run afoul of other rules like keeping visual order
            and tab order the same? Screen reader users are used to skip links
            & other standard navigation techniques.
       
        jraph wrote 1 day ago:
         > <!doctype html> is what you want for consistent rendering. Or
         <!DOCTYPE html> if you prefer writing markup like it’s 1998. Or even
         <!DoCtYpE hTmL> if you eschew all societal norms. It’s
         case-insensitive so they’ll all work.
         
         And <!DOCTYPE html> if you want polyglot (X)HTML.
       
          nikeee wrote 23 hours 41 min ago:
          I tend to lower-case all my HTML because it has less entropy and
          therefore can be compressed more effectively.
          
           But some modern compression algorithms come with a pre-defined
           dictionary for websites. These usually contain common stuff like
           <!doctype html> in its most used form, so writing it like everybody
           else might make the compression even more effective.
       
          bombcar wrote 1 day ago:
          We need HTML Sophisticated -
       
        aragonite wrote 1 day ago:
        Fun fact: both HN and (no doubt not coincidentally) paulgraham.com ship
         no DOCTYPE and are rendered in Quirks Mode. You can see this in
        devtools by evaluating `document.compatMode`.
        
        I ran into this because I have a little userscript I inject everywhere
        that helps me copy text in hovered elements (not just links). It does:
        
        [...document.querySelectorAll(":hover")].at(-1)
        
        to grab the innermost hovered element. It works fine on standards-mode
        pages, but it's flaky on quirks-mode pages.
        
        Question: is there any straightforward & clean way as a user to force a
        quirks-mode page to render in standards mode? I know you can do
        something like:
        
         document.write("<!DOCTYPE html>" + document.documentElement.innerHTML);
        
        but that blows away the entire document & introduces a ton of problems.
        Is there a cleaner trick?
       
          somat wrote 22 hours 47 min ago:
           On that subject, I would be fine if the browser always rendered in
           standards mode, or offered a user configuration option to do so.
          
          No need to have the default be compatible with a dead browser.
          
           Further thoughts: I just read the MDN quirks page, and perhaps I
           will start shipping Content-Type: application/xhtml+xml, as I don't
           really like putting the doctype in. It is the one screwball tag and
           requires special casing in my otherwise elegant HTML output engine.
       
          neRok wrote 1 day ago:
          A uBlock filter can do it: `||news.ycombinator.com/*$replace=/
       
            razster wrote 1 day ago:
             You could also use Tampermonkey to do that, and to perform the
             same function as the OP's userscript.
       
          cxr wrote 1 day ago:
          There is a better option, but generally the answer is "no"; the best
           solution would be for WHATWG to define document.compatMode to be a
           writable property instead of read-only.
          
          The better option is to create and hold a reference to the old nodes
          (as easy as `var old = document.documentElement`) and then after
          blowing everything away with document.write (with an empty* html
          element; don't serialize the whole tree), re-insert them under the
          new document.documentElement.
          
          * Note that your approach doesn't preserve the attributes on the html
          element; you can fix this by either pro-actively removing the child
          nodes before the document.write call and rely on
          document.documentElement.outerHTML to serialize the attributes just
          as in the original, or you can iterate through the old element's
          attributes and re-set them one-by-one.
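           A browser-only sketch of that second approach (untested here;
           variable names are mine):

```javascript
// Hold a reference to the old root and detach its children first, so
// outerHTML serializes just the (now empty) html element with its
// attributes intact.
const old = document.documentElement;
const children = [...old.childNodes];
children.forEach((node) => old.removeChild(node));

// Rewriting with a doctype puts the rewritten document in standards mode.
document.open();
document.write("<!DOCTYPE html>" + old.outerHTML);
document.close();

// Re-insert the original nodes under the fresh documentElement.
children.forEach((node) => document.documentElement.appendChild(node));
```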
       
          rob wrote 1 day ago:
          I wish `dang` would take some time to go through the website and make
          some usability updates. HN still uses a font-size value that usually
          renders to 12px by default as well, making it look insanely small on
          most modern devices, etc.
          
          At quick glance, it looks like they're still using the same CSS that
          was made public ~13 years ago:
          
  HTML    [1]: https://github.com/wting/hackernews/blob/5a3296417d23d1ecc90...
       
            sgarland wrote 2 hours 41 min ago:
            I hesitate to want any changes, but I could maybe get behind
            dynamic font sizing. Maybe.
            
            On mobile it’s fine, on Mac with a Retina display it’s fine;
            the only one where it isn’t is a 4K display rendering at native
            resolution - for that, I have my browser set to 110% zoom, which is
            perfect for me.
            
            So I have a workaround that’s trivial, but I can see the benefit
            of not needing to do that.
       
            onion2k wrote 17 hours 36 min ago:
            Text size is easily fixed in your browser with the zoom setting.
            Chrome will remember the level you use on a per site basis if you
            let it.
       
            nojs wrote 17 hours 42 min ago:
            The font size is perfect for me, and I hope it doesn’t get a
            “usability update”.
       
              afavour wrote 17 hours 11 min ago:
              “I don’t see any reason to accommodate the needs of others
              because I’m just fine”
       
            nine_k wrote 18 hours 10 min ago:
            Shameless plug: I made this userstyle to make HN comfortable to
            handle both on desktop and mobile. Minimal changes (font size,
            triangles, tiny bits of color), makes a huge difference, especially
            on a mobile screen.
            
  HTML      [1]: https://userstyles.world/style/9931/
       
              thelibrarian wrote 16 hours 33 min ago:
              Thanks for that, it works well, and I like the font choice!
              Though personally I found the font-weight a bit light and changed
              it to 400.
       
            cluckindan wrote 19 hours 9 min ago:
            I bet 99.9% of mobile users' hidden posts are accidentally hidden
       
            umanwizard wrote 19 hours 16 min ago:
            The text looks perfectly normal-sized on my laptop.
       
            embedding-shape wrote 22 hours 40 min ago:
            > At quick glance, it looks like they're still using the same CSS
            that was made public ~13 years ago:
            
            It has been changed since then for sure though. A couple of years
            ago the mobile experience was way worse than what it is today, so
            something has clearly changed. I think also some infamous
            "non-wrapping inline code" bug in the CSS was fixed, but can't
            remember if that was months, years or decades ago.
            
            On another note, they're very receptive to emails, and if you have
             specific things you want fixed, and maybe even ideas on how to do
             it in a good and proper way, you can email them (hn@ycombinator.com)
            and they'll respond relatively fast, either with a "thanks, good
            idea" or "probably not, here's why". That has been my experience at
            least.
       
            super256 wrote 22 hours 55 min ago:
            Really? I find the font very nice on my Pixel XL. It doesn't take
            too much space unlike all other modern websites.
       
            angiolillo wrote 23 hours 0 min ago:
            Setting aside the relative merits of 12pt vs 16pt font, websites
            ought to respect the user's browser settings by using "rem", but HN
            (mostly[1]) ignores this.
            
            To test, try setting your browser's font size larger or smaller and
            note which websites update and which do not. And besides helping to
            support different user preferences, it's very useful for
            accessibility.
            
            [1] After testing, it looks like the "Reply" and "Help" links
            respect large browser font sizes.
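             The suggested pattern, with illustrative values rather than HN's
             actual stylesheet:

```css
/* Scales with the user's browser font-size setting:
   1rem equals the root font size (16px by default). */
body  { font-size: 1rem; }    /* instead of a hardcoded 12px */
.meta { font-size: 0.85rem; } /* smaller text stays proportional */
```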
       
              panzi wrote 13 hours 7 min ago:
              Side note: pt != px. 16px == 12pt.
       
            zamadatix wrote 23 hours 37 min ago:
            12 px (13.333 px when in the adapted layout) is a little small -
            and that's a perfectly valid argument without trying to argue we
            should abandon absolute sized fonts in favor of feels.
            
            There is no such thing as a reasonable default size if we stop
            calibrating to physical dimensions. If you choose to use your phone
            at a scaling where what is supposed to be 1" is 0.75" then that's
            on you, not on the website to up the font size for everyone.
       
            marcosdumay wrote 1 day ago:
            No kidding. I've set the zoom level so long ago that I never
            noticed, but if I reset it on HN the text letters use about 2mm of
            width in my standard HD, 21" display.
       
              ErroneousBosh wrote 23 hours 34 min ago:
              > but if I reset it on HN the text letters use about 2mm of width
              in my standard HD, 21" display.
              
              1920x1080 24" screen here, .274mm pitch which is just about
              100dpi. Standard text size in HN is also about 2mm across,
              measured by the simple method of holding a ruler up to the screen
              and guessing.
              
              If you can't read this, you maybe need to get your eyes checked.
              It's likely you need reading glasses. The need for reading
              glasses kind of crept up on me because I either work on kind of
              Landrover-engine-scale components, or grain-of-sugar-scale
              components, the latter viewed down a binocular microscope on my
              SMD rework bench and the former big enough to see quite easily
              ;-)
       
            oskarkk wrote 1 day ago:
            > HN still uses a font-size value that usually renders to 12px by
            default as well, making it look insanely small on most modern
            devices, etc.
            
            On what devices (or browsers?) it renders "insanely small" for you?
            CSS pixels are not physical pixels, they're scaled to 1/96th of an
            inch on desktop computers, for smartphones etc. scaling takes into
            account the shorter typical distance between your eyes and the
            screen (to make the angular size roughly the same), so one CSS
            pixel can span multiple physical pixels on a high-PPI display. Font
            size specified in px should look the same on various devices. HN
            font size feels the same for me on my 32" 4k display (137 PPI), my
            24" display with 94 PPI, and on my smartphone (416 PPI).
       
              gs17 wrote 1 day ago:
              On my MacBook it's not "insanely small", but I zoom to 120% for a
              much better experience. I can read it just fine at the default.
       
                dormento wrote 21 hours 29 min ago:
                On my standard 1080p screen I gotta set it to 200% zoom to be
                comfortable. Still LOTS of content on the screen and no space
                wasted.
       
            Someone1234 wrote 1 day ago:
            I trust dang a lot; but in general I am scared of websites making
            "usability updates."
            
            Modern design trends are going backwards. Tons of spacing around
            everything, super low information density, designed for touch first
            (i.e. giant hit-targets), and tons of other things that were
            considered bad practice just ten years ago.
            
            So HN has its quirks, but I'd take what it is over what most
            20-something designers would turn it into. See old.reddit Vs.
            new.reddit or even their app.
       
              dlisboa wrote 19 hours 44 min ago:
              There's nothing trendy about making sure HN renders like a page
              from 15 years ago should. Relative font sizes are just so basic
              they should count as a bug fix and not "usability update".
       
              reactordev wrote 1 day ago:
              Overall I would agree but I also agree with the above commenter.
              It’s ok for mobile but on a desktop view it’s very small when
              viewed at anything larger than 1080p. Zoom works but doesn’t
              stick. A simple change to the font size in css will make it
              legible for mobile, desktop, terminal, or space… font-size:2vw
              or something that scales.
       
                5- wrote 18 hours 53 min ago:
                > Zoom works but doesn’t stick.
                
                perhaps try using a user agent that remembers your settings?
                e.g. firefox
       
                  reactordev wrote 7 hours 53 min ago:
                   Perhaps don't recommend workarounds for a site's failure to
                   use standards.
       
                cluckindan wrote 19 hours 7 min ago:
                It’s not ok for mobile. Misclicks all around if you don’t
                first pinch zoom to what you are trying to click.
       
                  hunter2_ wrote 16 hours 21 min ago:
                  Indeed, the vast majority of things I've flagged or hidden
                  have been the accidental result of skipping that extra step
                  of zooming.
       
            ano-ther wrote 1 day ago:
            Please don’t. HN has just the right information density with its
            small default font size. In most browsers it is adjustable. And you
            can pinch-zoom if you’re having trouble hitting the right link.
            
            None of the ”content needs white space and large fonts to
            breathe“ stuff or having to click to see a reply like on other
            sites. That just complicates interactions.
            
            And I am posting this on an iPhone SE while my sight has started to
            degrade from age.
       
              ErroneousBosh wrote 23 hours 33 min ago:
              Content does need white space.
              
              HN has a good amount of white space. Much more would be too much,
              much less would be not enough.
       
              rob wrote 1 day ago:
              Yeah, I'm really asking for tons of whitespace and everything to
              breathe sooooo much by asking for the default font size to be a
              browser default (16px) and updated to match most modern display
              resolutions in 2025, not 2006 when it was created.
              
              HN is the only site I have to increase the zoom level, and others
              below are doing the same thing as me. But it must be us with the
              issues. Obviously PG knew best in 2006 for decades to come.
       
                8note wrote 20 hours 46 min ago:
                 on mobile at least, I find that I can frequently zoom in, but
                 can almost never zoom out, so smaller text allows for more
                 accessibility than bigger text
       
                  hunter2_ wrote 16 hours 13 min ago:
                  Browser (and OS) zoom settings are for accessibility; use
                  that to zoom out if you've got the eyes for it. Pinching is
                  more about exploring something not expected to be readily
                  seen (and undersized touch targets).
       
                torstenvl wrote 22 hours 57 min ago:
                Don't do this.
       
                  rob wrote 22 hours 44 min ago:
                  I agree, don't set the default font size to ~12px equiv in
                  2025.
       
                JadeNB wrote 23 hours 21 min ago:
                You're obviously being sarcastic, but I don't think that it's a
                given that "those are old font-size defaults" means "those are
                bad font-size defaults."  I like the default HN size.  There's
                no reason that my preference should override yours, but neither
                is there any reason that yours should override mine, and I
                think "that's how the other sites are" intentionally doesn't
                describe the HN culture, so it need not describe the HN HTML.
       
                Izkata wrote 1 day ago:
                On the flipside, HN is the only site I don't have to zoom out
                of to keep it comfortable. Most sit at 90% with a rare few at
                80%.
                
                16px is just massive.
       
                  ryeights wrote 23 hours 50 min ago:
                  Sounds like your display scaling is a little out of whack?
       
                    hunter2_ wrote 16 hours 17 min ago:
                    Yeah, this is like keeping a sound system equalized for one
                    album and asserting that modern mastering is always badly
                    equalized. Tune the system to the standard, and adjust for
                    the oddball until it's remastered.
       
                      lioeters wrote 14 hours 10 min ago:
                      Except we all know what happened to the "standard" with
                      the Loudness War.
       
            robertlagrant wrote 1 day ago:
            I'm sure they accept PRs, although it can be tricky to evaluate the
            effect a CSS change will have on a broad range of devices.
       
            martin-t wrote 1 day ago:
            I find it exactly the right size on both PC and phone.
            
            There's a trend to make fonts bigger but I never understood why. Do
            people really have trouble reading it?
            
            I prefer seeing more information at the same time, when I used
            Discord (on PC), I even switched to IRC mode and made the font
            smaller so that more text would fit.
       
              zachrip wrote 23 hours 55 min ago:
               I'm low vision and I have to zoom to 175% on HN to read
               comfortably; this is basically the only site where I go to this
               extreme.
       
              trenchpilgrim wrote 1 day ago:
              I have mild vision issues and have to blow up the default font
              size quite a bit to read comfortably. Everyone has different
              eyes, and vision can change a lot with age.
       
              jermaustin1 wrote 1 day ago:
              I have HN zoomed to 150% on my screens that are between 32 and 36
              inches from my eyeballs when sitting upright at my desk.
              
              I don't really have to do the same elsewhere, so I think the 12px
              font might be just a bit too small for modern 4k devices.
       
              pletnes wrote 1 day ago:
              Even better: it scales nicely with the browser’s zoom setting.
       
              askonomm wrote 1 day ago:
              I'm assuming you have a rather small resolution display? On a 27"
              4k display, scaled to 150%, the font is quite tiny, to the point
              where the textarea I currently type this in (which uses the
              browsers default font size) is about 3 times the perceivable size
              in comparison to the HN comments themselves.
       
                martin-t wrote 1 day ago:
                1920x1080 and 24 inches
                
                Maybe the issue is not scaling according to DPI?
                
                OTOH, people with 30+ inch screens probably sit a bit further
                away to be able to see everything without moving their head so
                it makes sense that even sites which take DPI into account use
                larger fonts because it's not really about how large something
                is physically on the screen but about the angular size relative
                to the eye.
       
                  Izkata wrote 1 day ago:
                  Yeah, one of the other cousin comments mentions 36 inches
                   away. I don't think they realize what outliers they are. Of
                   course you have to make everything huge when your
                  screen is so much further away than normal.
       
                rob wrote 1 day ago:
                Agreed. I'm on an Apple Thunderbolt Display (2560x1440) and I'm
                also scaled up to 150%.
                
                I'm not asking for some major, crazy redesign. 16px is the
                browser default and most websites aren't using tiny, small font
                sizes like 12px any longer.
                
                The only reason HN is using it is because `pg` made it that in
                2006, at a time when it was normal and made sense.
       
                  askonomm wrote 1 day ago:
                   Yup, and these days we have relative units in CSS (em, rem),
                   so we no longer need to hardcode pixels and everyone wins.
                   That way people get usability according to the browser's
                   defaults, which makes the whole thing user-configurable.
       
        irarelycomment wrote 1 day ago:
        Similar vibes to
        
  HTML  [1]: https://j4e.name/articles/a-minimal-valid-html5-document/
       
        lapcat wrote 1 day ago:
        width=device-width is actually redundant and cargo culting. All you
        need is initial-scale. I explain in a bit more detail here:
        
  HTML  [1]: https://news.ycombinator.com/item?id=36112889
       
          qingcharles wrote 20 hours 46 min ago:
          I've read your post. I'm going to test this on some crappy devices
          this week to confirm and then update my boilerplate.
       
          cxr wrote 1 day ago:
          outerHTML is an attribute of Element and DocumentFragment is not an
          Element.
          
          Where do the standards say it ought to work?
       
        ilaksh wrote 1 day ago:
        Anyone else prefer to use web components without bundling?
        
         I probably should not admit this, but I have been using Lit Elements
         with raw JavaScript, because I stopped using autocomplete a while
         ago.
        
        I guess not using TypeScript at this point is basically the equivalent
        for many people these days of saying that I use punch cards.
       
          zeroq wrote 13 hours 3 min ago:
          > not using TypeScript at this point is basically the equivalent for
          many people these days of saying that I use punch cards
          
           I very much enjoy writing no-build, plain vanilla JS for the sake of
           simplicity and the ability to launch a project by dragging an HTML
           file onto a browser. Not to mention the power of making changes with
           Notepad instead of needing a whole toolchain on your system.
       
          LelouBil wrote 16 hours 51 min ago:
          Stopped using autocomplete ? I want to hear more about this.
          
          I could never.
       
          WorldMaker wrote 21 hours 52 min ago:
          I've been leaning that direction more and more every year. ESM
          loading in browsers is really good at this point (with and without
          HTTP/2+). Bundle-free living is nice now.
          
          Even frameworks with more dependencies bundling/vendoring just your
          dependencies at package upgrade time and using an importmap to load
          them is a good experience.
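           Such an import map might look like this (the paths and the Lit
           package layout here are illustrative):

```html
<script type="importmap">
{
  "imports": {
    "lit": "/vendor/lit/index.js",
    "lit/": "/vendor/lit/"
  }
}
</script>
<script type="module">
  // Bare specifiers now resolve via the map above.
  import { LitElement, html } from "lit";
</script>
```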
          
          I'm not giving up Typescript at this point, but Typescript configured
          to modern `"target"` options where it is doing mostly just type
          stripping is a good experience, especially in a simple `--watch`
          loop.
       
          mock-possum wrote 1 day ago:
          God yes, as little tool chain as I can get away with.
       
          dinkleberg wrote 1 day ago:
          Even with TS, if I’m doing web components rather than a full
          framework I prefer not bundling. That way I can have each page load
          the exact components it needs. And with http/2 I’m happy to have
          each file separate. Just hash them and set an immutable cache header
          so it even when I make changes the users only have to pull the new
          version of things that actually changed.
       
            zeroq wrote 13 hours 8 min ago:
            This.
            
             I'm old enough to have first-hand experience of building a Flash
             website that had to load a couple hundred tiny XML files for
             configuration, only to find out that some ~300kb was taking a
             couple of minutes to load because of the limited connection pool
             in old HTTP.
             
             Back then bundling and overly complicated build steps were not yet
             invented, so instead of serving one large XML (which would work
             out of the box, as there was a root XML whose nodes, instead of
             holding data, linked to external files) I quickly decided to
             implement zip compression and bundle the package that way.
             
             Fast forward to 2025, when most devs need an external library to
             check if a number isEven and the simplest project needs a
             toolchain that's more complicated than the whole Apollo project.
       
          brazukadev wrote 1 day ago:
          > Anyone else prefer to use web components without bundling?
          
          Yes! not only that but without ShadowDOM as well.
       
          nonethewiser wrote 1 day ago:
          Can't say I generally agree with dropping TS for JS but I suppose
          it's easier to argue when you are working on smaller projects. But
          here is someone that agrees with you with less qualification than
          that [1] I was introduced to this decision from the Lex Fridman/DHH
          podcast. He talked a lot about typescript making meta programming
          very hard. I can see how that would be the case but I don't really
          understand what sort of meta programming you can do with JS. The
          general dynamic-ness of it I get.
          
  HTML    [1]: https://world.hey.com/dhh/turbo-8-is-dropping-typescript-701...
       
            mariusor wrote 1 day ago:
            Luckily Lit supports typescript so you wouldn't need to drop it.
       
          VPenkov wrote 1 day ago:
           37signals [0] famously uses their own Stimulus [1] framework on most
           of their products. Their CEO is a proponent of the whole no-build
           approach because of the complexity a build step adds, and because
           builds make it difficult for people to pop open your code and learn
           from it.
          
          [0]: [1]:
          
  HTML    [1]: https://basecamp.com/
  HTML    [2]: https://stimulus.hotwired.dev/
       
            evilduck wrote 1 day ago:
            It's impossible to look at a Stimulus based site (or any similar
            SSR/hypermedia app) and learn anything useful beyond superficial
            web design from them because all of the meaningful work is being
            done on the other side of the network calls. Seeing a "data-action"
            or a "hx-swap" in the author's original text doesn't really help
            anyone learn anything without server code in hand. That basically
            means the point is moot because if it's an internal team member or
            open source wanting to learn from it, the original source vs.
            minified source would also be available.
            
            It's also more complex to do JS builds in Ruby when Ruby isn't up
            to the task of doing builds performantly and the only good option
            is calling out to other binaries. That can also be viewed from the
            outside as "we painted ourselves into a corner, and now we will
            discuss the virtues of standing in corners". Compared to Bun, this
            feels like a dated perspective.
            
            DHH has had a lot of opinions, he's not wrong on many things but
            he's also not universally right for all scenarios either and the
            world moved past him back in like 2010.
       
              VPenkov wrote 18 hours 1 min ago:
              Well you do learn that a no-build process can work at some scale,
              and you can see what tech stack is used and roughly how it works.
              
              But regardless, I didn't mean to make any argument for or against
              this, I'm saying this was one of the points DHH made at some
              point.
       
            christophilus wrote 1 day ago:
            Dunno. You can build without minifying if you want it to be
            (mostly) readable. I wouldn’t want to give up static typing again
            in my career.
       
        isolay wrote 1 day ago:
        The "without meta utf-8" part of course depends on your browser's
        default encoding.
       
          kevin_thibedeau wrote 1 day ago:
          What mainstream browsers aren't defaulting to utf-8 in 2025?
       
            Eric_WVGG wrote 1 day ago:
            I spent about half an hour trying to figure out why some JSON in my
            browser was rendering è incorrectly, despite the output code and
            downloaded files being seemingly perfect. I came to the conclusion
            that the browsers (Safari and Chrome) don't use UTF-8 as the
            default renderer for everything and moved on.
            
            This should be fixed, though.
       
            layer8 wrote 1 day ago:
            I wouldn’t be surprised if they don’t for pages loaded from
            local file URIs.
       
            akho wrote 1 day ago:
             HTML5 does not even allow any other values in <meta charset>. I
             think you need to use a different doctype to get what the
             screenshot shows.
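             That is, the only conforming encoding declaration in HTML5 is
             UTF-8 (the label itself is matched case-insensitively):

```html
<!-- Conforming: the declared encoding must be utf-8. -->
<meta charset="utf-8">

<!-- Non-conforming in HTML5, though browsers still honor it: -->
<meta charset="windows-1252">
```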
       
              layer8 wrote 1 day ago:
              While true, they also require user agents to support other
              encodings specified that way: [1] Another funny thing here is
              that they say “but not limited to” (the listed encodings),
              but then say “must not support other encodings” (than the
              listed ones).
              
  HTML        [1]: https://html.spec.whatwg.org/multipage/parsing.html#char...
       
                shiomiru wrote 1 day ago:
                It says
                
                > the encodings defined in Encoding, including, but not limited
                to
                
                 where "Encoding" refers to [1] (probably that should be a
                 link). So it just means "the other spec defines at least
                 these, but maybe others too." (e.g. EUC-JP is included in
                 Encoding but not listed in HTML.)
                
  HTML          [1]: https://encoding.spec.whatwg.org
       
                  layer8 wrote 22 hours 47 min ago:
                  Ah, I understood it to refer to encoding from the preceding
                  section.
       
            naniwaduni wrote 1 day ago:
            All of them, pretty much.
       
        hlava wrote 1 day ago:
        It's 2025, the end of it. Is this really necessary to share?
       
          OuterVale wrote 1 day ago:
          When sharing this post on his social media accounts, Jim prefixed the
          link with: 'Sometimes its cathartic to just blog about really basic,
          (probably?) obvious stuff'
       
          nonethewiser wrote 1 day ago:
          Feels even more important to share honestly. It's unexamined
          boilerplate at this point.
       
          troupo wrote 1 day ago:
          Every day you can expect 10000 people learning a thing you thought
          everyone knew: [1] To quote the alt text: "Saying 'what kind of an
          idiot doesn't know about the Yellowstone supervolcano' is so much
          more boring than telling someone about the Yellowstone supervolcano
          for the first time."
          
  HTML    [1]: https://xkcd.com/1053/
       
            janwillemb wrote 23 hours 47 min ago:
            Thanks! I didn't know that one.
            
            I had a teacher who became angry when a question was asked about a
            subject he felt students should already be knowledgeable about.
            "YOU ARE IN xTH GRADE AND STILL DON'T KNOW THIS?!" (intentional
            shouting uppercase). The fact that you learned it yesterday doesn't
            mean all humans in the world also learned it yesterday. Ask
            questions, always. Explain, always.
       
              spc476 wrote 19 hours 45 min ago:
              Such questions can be jarring though.  I remember my "Unix
              Systems Programming" class in college.    It's a third year course.
               The instructor was describing the layout of a process in memory,
              "here's the text segment, the data segment, etc." when a student
              asked, "Where do the comments go?"
       
                janwillemb wrote 9 hours 57 min ago:
                :) true. I'm a teacher myself. I never dismiss questions, but I
                do get discouraged sometimes.
       
            Skeime wrote 1 day ago:
            And here I was, thinking everybody already knew XKCD 1053 ...
       
            allknowingfrog wrote 1 day ago:
            XKCD 1053 is a way of life. I think about it all the time, and it
            has made me a better human being.
       
          4ndrewl wrote 1 day ago:
          Yes. Knowledge is not equally distributed.
       
        Aransentin wrote 1 day ago:
        Note that <html> and <body> auto-close and don't need to be
        terminated.
        
        Also, wrapping the head content in an actual <head> element is
        optional.
        
        You also don't need the quotes as long as the attribute value doesn't
        have spaces or the like; lang=en is OK.
        
        (kind of pointless as the average website fetches a bazillion bytes of
        javascript for every page load nowadays, but sometimes slimming things
        down as much as possible can be fun and satisfying)
       
          busymom0 wrote 18 hours 5 min ago:
          If I don't close something I opened, I feel weird.
       
          qingcharles wrote 20 hours 53 min ago:
           > Note that <html> and <body> auto-close and don't need to be
           terminated.
          
          You monster.
       
          tannhaeuser wrote 23 hours 52 min ago:
          Not only do html and body auto-close, their tags, including
          start-element tags, can be omitted altogether:
          
              Shortest valid doc
              
          
          Body text following here
          
          (cf explainer slides at [1] for the exact tag inferences SGML/HTML
          does to arrive at the fully tagged doc)
          
          [1] (linked from [2] )
          
  HTML    [1]: https://sgmljs.sgml.net/docs/html5-dtd-slides-wrapper.html
  HTML    [2]: https://sgmljs.sgml.net/blog/blog1701.html
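          A sketch of what such a document can look like with every omissible
          tag left out (the parser infers html, head, and body):

          ```html
          <!doctype html>
          <title>Shortest valid doc</title>
          <p>Body text following here
          ```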
       
          alt187 wrote 1 day ago:
          I'm not sure I'd call keeping the  tag open satisfying but it is a
          fun fact.
       
           nodesocket wrote 1 day ago:
           Didn't know you can omit <html> .. </html> but I prefer for clarity
           to keep them.
       
            bentley wrote 1 day ago:
               Do you also spell out the implicit <tbody> in all your tables
               for clarity?
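               For illustration, these two tables parse to the same DOM tree,
               because the parser wraps stray rows in an implied tbody:

               ```html
               <table>
                 <tr><td>implicit</td></tr>
               </table>

               <table>
                 <tbody>
                   <tr><td>explicit</td></tr>
                 </tbody>
               </table>
               ```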
       
              tracker1 wrote 17 hours 23 min ago:
              Sometimes... especially if a single record displays across more
              than a single row.
              
              I almost always use thead.
       
              christophilus wrote 1 day ago:
              Yes. Explicit is almost always better than implicit, in my
              experience.
       
              ndegruchy wrote 1 day ago:
              I do.
              
              `` and ``, too, if they're needed. I try to use all the free
              stuff that HTML gives you without needing to reach for JS. It's a
              surprising amount. Coupled with CSS and you can get pretty far
              without needing anything. Even just having `` with minimal JS
              enables a ton of 'interactivity'.
       
          zelphirkalt wrote 1 day ago:
          This kind of thing will always just feel shoddy to me. It is not much
          work to properly close a tag. The number of bytes saved is
          negligible, compared to basically any other aspect of a website.
           Avoiding unneeded div spam would already save more. Or, for example,
          making sure CSS is not bloated. And of course avoiding downloading
          3MB of JS.
          
          What this achieves is making the syntax more irregular and harder to
          parse. I wish all these tolerances wouldn't exist in HTML5 and
          browsers simply showed an error, instead of being lenient. It would
          greatly simplify browser code and HTML spec.
       
            bazoom42 wrote 1 day ago:
            > I wish all these tolerances wouldn't exist in HTML5 and browsers
            simply showed an error, instead of being lenient.
            
            Who would want to use a browser which would prevent many currently
            valid pages from being shown?
       
              zelphirkalt wrote 23 hours 4 min ago:
              I mean, I am obviously talking about a fictive scenario, a
              somewhat better timeline/universe. In such a scenario, the shoddy
              practices of not properly closing tags and leaning on leniency in
              browser parsing and sophisticated fallbacks and all that would
              not have become a practice and those many currently valid
              websites would mostly not have been created, because as someone
              tried to create them, the browsers would have told them no. Then
              those people would revise their code, and end up with clean,
              easier to parse code/documents, and we wouldn't have all these
              edge and special cases in our standards.
              
              Also obviously that's unfortunately not the case today in our
              real world. Doesn't mean I cannot wish things were different.
       
            shiomiru wrote 1 day ago:
            > It would greatly simplify browser code and HTML spec.
            
             I doubt it would make a dent - e.g. in the "skipping " case,
             you'd be replacing the error recovery mechanism of "jump to the
             next insertion mode" with "display an error", but a) you'd still
             need the code path to handle it, b) now you're in the business
             of producing good error messages, which is notoriously difficult.
             
             Something that would actually make the parser a lot simpler is
             removing document.write, which has been obsolete ever since the
             introduction of the DOM and whose main remaining real-world
             use-case seems to be ad delivery. (If it's not clear why this
             would help, consider that document.write can write scripts that
             call document.write, etc.)
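             A minimal sketch of that reentrancy; the escaped <\/script> in
             the string is needed so the inner markup doesn't close the outer
             script element early:

             ```html
             <script>
               document.write('<script>document.write("written by a written script")<\/script>');
             </script>
             ```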
       
            bentley wrote 1 day ago:
            Implicit elements and end tags have been a part of HTML since the
            very beginning. They introduce zero ambiguity to the language,
            they’re very widely used, and any parser incapable of handling
            them violates the spec and would be incapable of handling piles of
            real‐world strict, standards‐compliant HTML.
            
            > I wish all these tolerances wouldn't exist in HTML5 and browsers
            simply showed an error, instead of being lenient.
            
            They (W3C) tried that with XHTML. It was soundly rejected by
            webpage authors and by browser vendors. Nobody wants the Yellow
            Screen of Death.
            
  HTML      [1]: https://en.wikipedia.org/wiki/File:Yellow_screen_of_death....
       
              tracker1 wrote 17 hours 35 min ago:
              XHTML in practice was too strict and tended to break a few other
              things (by design) for better or worse, so nobody used it...
              
              That said, actually writing HTML that can be parsed via an XML
              parser is generally a good, neighborly thing to do, as it allows
              for easier scraping and parsing through browsers and non-browser
              applications alike.  For that matter, I will also add additional
              data-* attributes to elements just to make testing (and scraping)
              easier.
       
               alwillis wrote 1 day ago:
               I didn't have a problem with XHTML back in the day; it took a
               while to unlearn it; I would instinctively close those tags: ,
               etc.
               
               It was actually the XHTML 2.0 specification [1], which
               discarded backwards compatibility with HTML 4, that was the
               straw that broke the camel's back. No more forms as we knew
               them, for example; we were supposed to use XForms.
              
              That's when WHATWG was formed and broke with the W3C and created
              HTML5.
              
              Thank goodness.
              
              
  HTML        [1]: https://en.wikipedia.org/wiki/XHTML#XHTML_2.0
       
                WorldMaker wrote 21 hours 26 min ago:
                XHTML 2.0 had a bunch of good ideas and a lot of them got
                "backported" into HTML 5 over the years.
                
                XHTML 2.0 didn't even really discard backwards-compatibility
                that much: it had its compatibility story baked in with XML
                Namespaces. You could embed XHTML 1.0 in an XHTML 2.0 document
                just as you can still embed SVG or MathML in HTML 5. XForms was
                expected to take a few more years and people were expecting to
                still embed XHTML 1.0 forms for a while into XHTML 2.0's life.
                
                At least from my outside observer perspective, the formation of
                WHATWG was more a proxy war between the view of the web as a
                document platform versus the view of the web as an app
                platform. XHTML 2.0 wanted a stronger document-oriented web.
                
                 (Also, XForms had some good ideas, too. Some of what people
                 want in "forms helpers" when they ask for something like
                 HTMX to be standardized in browsers was part of XForms, such
                 as JS-less fetch/XHR with in-place refresh for form submits.
                 Some of what HTML 5 slowly added in terms of INPUT tag
                 validation is also a sort of "backport" from XForms, albeit
                 with no dependency on XSD.)
       
              haskellshill wrote 1 day ago:
              > They introduce zero ambiguity to the language
              
               Well, to machines parsing it, yes, but for humans writing and
               reading it they are helpful. For example, if you have
               
                   <p>
                    foo
                   <p>
                    bar
               
               and change it to
               
                   <div>
                    foo
                   <div>
                    bar
               
               suddenly you've got a syntax error (or some quirks-mode
               rendering with nested divs).
              
              The "redundancy" of closing the tags acts basically like a
              checksum protecting against the "background radiation" of human
              editing.
              And if you're writing raw HTML without an editor that can
              autocomplete the closing tags then you're doing it wrong anyway.
              Yes that used to be common before and yes it's a useful backwards
              compatibility  / newbie friendly feature for the language, but
              that doesn't mean you should use it if you know what you're
              doing.
       
                recursive wrote 1 day ago:
                It sounds like you're headed towards XHTML.  The rise and fall
                of XHTML is well documented and you can binge the whole thing
                if you're so inclined.
                
                But my summarization is that the reason it doesn't work is that
                strict document specs are too strict for humans.  And at a time
                when there was legitimate browser competition, the one that
                made a "best effort" to render invalid content was the winner.
       
                  haskellshill wrote 23 hours 57 min ago:
                  The merits and drawbacks of XHTML has already been discussed
                  elsewhere in the thread and I am well aware of it.
                  
                  >  And at a time when there was legitimate browser
                  competition, the one that made a "best effort" to render
                  invalid content was the winner.
                  
                  Yes, my point is that there is no reason to still write
                  "invalid" code just because it's supported for backwards
                  compatibility reasons. It sounds like you ignored 90% of my
                  comment, or perhaps you replied to the wrong guy?
       
                    recursive wrote 23 hours 50 min ago:
                     I'm a stickling pedant for HTML validity, but close tags
                     on <p> and <li> are optional by spec.  Close tags for
                     void elements such as <br> and <hr> are prohibited. 
                     XML-like self-closing trailing slashes explicitly have no
                     meaning in HTML.
                     
                     Close tags for <script> are required.  But if people
                     start treating it like XML, they write <script />.  But
                     that fails, because the script element requires closure,
                     and that slash has no meaning in HTML.
                    
                    I think validity matters, but you have to measure validity
                    according to the actual spec, not what you wish it was, or
                    should have been.  There's no substitute for actually
                    knowing the real rules.
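                     A sketch of the three cases side by side (app.js is a
                     placeholder name):

                     ```html
                     <!-- end tags optional: the next <p> closes the previous one -->
                     <p>first paragraph
                     <p>second paragraph

                     <!-- void elements: end tags prohibited, trailing slash meaningless -->
                     <br>
                     <hr>

                     <!-- script: the end tag is always required -->
                     <script src="app.js"></script>
                     ```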
       
                      sgarland wrote 2 hours 32 min ago:
                      IMO, all of those make logical sense. If you’re
                      inserting a line break or literal line, it can be thought
                      of as a 1-dimensional object, which cannot enclose
                      anything. If you want another one, insert another one.
                      
                      In contrast, paragraphs and lists do enclose content, so
                      IMO they should have clear delineations - if nothing
                      else, to make visually understanding the code more clear.
                      
                       I’m also sure that someone will now reference another
                       HTML element I didn’t think about that breaks my
                       analogy.
       
                      haskellshill wrote 23 hours 34 min ago:
                      Are you misunderstanding on purpose? I am aware they are
                      optional. I am arguing that there is no reason to omit
                      them from your HTML. Whitespace is (mostly) optional in
                      C, does that mean it's a good idea to omit it from your
                      programs? Of course a br tag needs no closing tag because
                      there is no content inside it. How exactly is that an
                      argument for omitting the closing p tag? The XML standard
                      has no relevance to the current discussion because I'm
                      not arguing for "starting to treat it like XML".
       
                        recursive wrote 22 hours 35 min ago:
                        I'm beginning to think I'm misunderstanding, but it's
                        not on purpose.
                        
                        Including closing tags as a general rule might make
                        readers think that they can rely on their presence. 
                        Also, in some cases they are prohibited.  So you can't
                        achieve a simple evenly applied rule anyway.
       
                          haskellshill wrote 19 hours 52 min ago:
                          Well, just because something is allowed by the syntax
                          does not mean it's a good idea, that's why pretty
                          much every language has linters.
                          
                          And I do think there's an evenly applied rule,
                          namely: always explicitly close all non-void
                          elements. There are only 14 void elements anyway, so
                          it's not too much to expect readers to know them. In
                          your own words "there's no substitute for actually
                          knowing the real rules".
                          
                          I mean, your approach requires memorizing for which
                          15 elements the closing tag can be omitted anyway
                          (otherwise you'll mentally parse the document wrong
                          (i.e. thinking a br tag needs to be closed is equally
                          likely as thinking p tags can be nested)).
                          
                          The risk that somebody might be expecting a closing
                          tag for an hr element seems minuscule and is a small
                          price to pay for conveniences such as (as I explained
                          above) being able to find and replace a p tag or a li
                          tag to a div tag.
       
                            recursive wrote 17 hours 36 min ago:
                             I don't believe there are any contexts where <li>
                             is valid that <div> would also be valid.
                             
                             I'm not opposed to closing <li> tags as a general
                             practice.  But I don't think it provides as much
                             benefit as you're implying.  Valid HTML has a
                             number of special rules like this.  Like
                             different content parsing rules for <style> and
                             <script>.  Like "foreign content".
                             
                             If you try to write lint-passing HTML in the
                             hopes that you could change <li> to <div> easily,
                             you still have to contend with the fact that such
                             a change cannot be valid, except possibly as a
                             direct descendant of <template>.
       
                              haskellshill wrote 4 hours 21 min ago:
                              Again, you're focusing on a pointless detail.
                              Sure, I made a mistake in offhandedly using li as
                              an example. Why do you choose to ignore the
                              actually valid p example though? Seems like
                              you're more interested in demonstrating your
                              knowledge of HTML parsing (great job, proud of
                              ya) than anything else. Either way, you've given
                              zero examples of benefits of not doing things the
                              sensible way that most people would expect.
       
             Aransentin wrote 1 day ago:
             I agree for sure, but that's a problem with the spec, not the
             website. If there are multiple ways of doing something you might
             as well do the minimal one. The parser will always have to be
             able to handle all the edge cases no matter what anyway.
             
             You might want to always consistently terminate all tags and
             such for aesthetic or human-centered (reduced cognitive load,
             easier scanning) reasons though, I'd accept that.
       
            ifwinterco wrote 1 day ago:
            You're not alone, this is called XHTML and it was tried but not
            enough people wanted to use it
       
              zelphirkalt wrote 1 day ago:
              Yeah, I remember, when I was at school and first learning HTML
              and this kind of stuff. When I stumbled upon XHTML, I right away
              adapted my approach to verify my page as valid XHTML. Guess I was
              always on this side of things. Maybe machine empathy? Or also
              human empathy, because someone needs to write those parsers and
              the logic to process this stuff.
       
              sevenseacat wrote 1 day ago:
              oh man, I wish XHTML had won the war. But so many people (and
              CMSes) were creating dodgy markup that simply rendered yellow
              screens of doom, that no-one wanted it :(
       
                adzm wrote 1 day ago:
                i'm glad it never caught on. the case sensitivity (especially
                for css), having to remember the xmlns namespace URI in the
                root element, CDATA sections for inline scripts, and insane
                ideas from companies about extending it further with more xml
                namespaced elements... it was madness.
       
                  haskellshill wrote 1 day ago:
                  It had too much unnecessary metadata yes, but case
                  insensitivity is always the wrong way to do stuff in
                  programming (e.g. case insensitive file system paths). The
                  only reason you'd want it is for real-world stuff like person
                  names and addresses etc. There's no reason you'd mix the case
                  of your CSS classes anyway, and if you want that, why not
                  also automatically match camelCase with snake_case with
                  kebab-case?
       
                  imiric wrote 1 day ago:
                  I'll copy what I wrote a few days ago:
                  
                  The fact XHTML didn't gain traction is a mistake we've been
                  paying off for decades.
                  
                  Browser engines could've been simpler; web development tools
                  could've been more robust and powerful much earlier; we would
                  be able to rely on XSLT and invent other ways of processing
                  and consuming web content; we would have proper XHTML
                  modules, instead of the half-baked Web Components we have
                  today. Etc.
                  
                  Instead, we got standards built on poorly specified
                  conventions, and we still have to rely on 3rd-party
                  frameworks to build anything beyond a toy web site.
                  
                  Stricter web documents wouldn't have fixed all our problems,
                  but they would have certainly made a big impact for the
                  better.
                  
                  And add:
                  
                  Yes, there were some initial usability quirks, but those
                  could've been ironed out over time. Trading the potential of
                  a strict markup standard for what we have today was a
                  colossal mistake.
       
                    recursive wrote 23 hours 56 min ago:
                    There's no way it could have gained traction.  Consider two
                    browsers.  One follows the spec explicitly, and one goes
                    into "best-effort" mode on encountering invalid markup. 
                    End users aren't going to care about the philosophical
                    reasoning for why Browser A doesn't show them their school
                    dance recital schedule.
                    
                    Consider JSON and CSV.    Both have formal specs.  But in the
                    wild, most parsers are more lenient than the spec.
       
                      WorldMaker wrote 21 hours 19 min ago:
                      Which is also largely what happened: HTML 5 is in some
                      ways that "best-effort" mode, standardized by a different
                      standards body to route around XHTML's philosophies.
       
                      ifwinterco wrote 22 hours 53 min ago:
                      Yeah this is it. We can debate what would be nicer
                      theoretically until the cows come home but there's a kind
                      of real world game theory that leads to browsers doing
                      their best to parse all kinds of slop as well as they
                      can, and then subsequently removing the incentive for
                      developers and tooling to produce byte perfect output
       
           chrismorgan wrote 1 day ago:
           The html, head and body start and end tags are all optional. In
           practice, you shouldn’t omit the html start tag because of the
           lang attribute, but the others never need any attributes. (If
           you’re putting attributes or classes on the body element,
           consider whether the html element is more appropriate.) It’s a
           long time since I wrote any of those other tags.
       
         chrismorgan wrote 1 day ago:
         > <html lange="en">
         
         s/lange/lang/
         
         > <meta name="viewport" content="width=device-width, initial-scale=1.0">
         
         Don’t need the “.0”. In fact, the atrocious incomplete spec of
         this stuff < [1] > specifies using strtod to parse the number, which
         is locale dependent, so in theory in a locale that uses a different
         decimal separator (e.g. French), the “.0” will be ignored.
         
         I have yet to test whether initial-scale=1.5 misbehaves (parsing as
         1 instead of 1½) with LC_NUMERIC=fr_FR.UTF-8 on any user agents.
        
  HTML  [1]: https://www.w3.org/TR/css-viewport-1/
       
          cousin_it wrote 1 day ago:
          Wow. This reminds me of Google Sheets formulas, where function
          parameters are separated with , or ; depending on locale.
       
            fainpul wrote 1 day ago:
            Not sure if this still is the case, but Excel used to fail to open
            CSV files correctly if the locale used another list separator than
            ',' – for example ';'.
       
              eska wrote 1 day ago:
              I’m happy to report it still fails and causes me great pain.
       
                 haskellshill wrote 1 day ago:
                 Really? LibreOffice at least has a File > Open menu that
                 allows you to specify the separator and other CSV stuff,
                 like the quote character
       
                   LtdJorge wrote 1 day ago:
                   You have to be inside Excel and use the data import tools.
                   You cannot double click to open; it puts everything in one
                   cell…
       
                    mrguyorama wrote 1 day ago:
                    Sometimes you double click and it opens everything just
                    fine and silently corrupts and changes and drops data
                    without warning or notification and gives you no way to
                    prevent it.
                    
                    The day I found that Intellij has a built in CSV tabular
                    editor and viewer was the best day.
       
                  fainpul wrote 1 day ago:
                  Excel has that too. But you can't just double-click a CSV
                  file to open it.
       
            WA wrote 1 day ago:
            Try Apple Numbers, where even function names are translated and you
            can’t copy & paste without an error if your locale is, say,
            German.
       
               neves wrote 1 day ago:
               Aha, in Microsoft Excel they even translate the shortcuts. In
               the Brazilian version, Ctrl-S is "Underline" instead of
               "Save". Every sheet of mine ends up with a lot of underlined
               cells :-)
       
            layer8 wrote 1 day ago:
             Given that the world is about evenly split on the decimal
             separator [0] (and correspondingly on the thousands grouping
             separator), it’s hard to avoid. You could standardize on
             “;” as the argument separator, but “1,000” would still
             remain ambiguous.
            
            [0]
            
  HTML      [1]: https://en.wikipedia.org/wiki/Decimal_separator#Convention...
       
            simulo wrote 1 day ago:
            Oh, good to know that it depends on locale, I always wondered about
            that behavior!
       
            Moru wrote 1 day ago:
            Not to mention the functions are also translated to the other
            language. I think both these are the fault of Excel to be honest. I
            had this problem long before Google came around.
            
            And it's really irritating when you have the computer read
            something out to you that contains numbers. 53.1 km reads like you
            expect but 53,1 km becomes "fifty-three (long pause) one
            kilometer".
       
              cubefox wrote 1 day ago:
              > Not to mention the functions are also translated to the other
              language.
              
               This makes a lot of sense when you recognize that Excel
               formulas, unlike proper programming languages, aren't
               necessarily written by people with a sufficient grasp of the
               English language, especially when it comes to more abstract
               mathematical concepts, which aren't taught in secondary-school
               English classes but in native-language mathematics classes.
       
            troupo wrote 1 day ago:
             The behaviour predates Google Sheets and likely comes from
             Excel (whose behavior Sheets emulates/reverse-engineers in
             many places). And I wouldn't be surprised if Excel got it
             from Lotus.
       
            noja wrote 1 day ago:
            Same as Excel and LibreOffice surely?
       
              KoolKat23 wrote 1 day ago:
              Yes
       
        theandrewbailey wrote 1 day ago:
        I often reach for the HTML5 boilerplate for things like this:
        
  HTML  [1]: https://github.com/h5bp/html5-boilerplate/blob/main/dist/index...
       
          fud101 wrote 1 day ago:
          how do you find this when you need it?
       
            crabmusket wrote 19 hours 31 min ago:
            DuckDuckGo "html5 boilerplate", click on website, click on "source
            code", follow your nose to the index.html file
       
          xg15 wrote 1 day ago:
          There is some irony in then-Facebook's proprietary metadata lines
          being in there (the "og:..." lines). Now with their name being
          "Meta", it looks even more proprietary than before.
          
          Maybe the name was never about the Metaverse at all...
       
            zelphirkalt wrote 1 day ago:
             Are they proprietary? How? Isn't Open Graph a standard,
             widely implemented by many parties, including a lot of open
             source software?
       
              chrismorgan wrote 1 hour 46 min ago:
               The Open Graph Protocol was thrown over the wall by
               Facebook and promptly abandoned. It’s an atrocious spec: a
               good chunk of what it did (and most of what people actually
               wanted from it) was already unnecessary (og:description,
               for example, is stupid); its data model makes several
               woefully bad choices; it’s firmly based around Facebook’s
               interests (especially pictures!); and it’s built on
               standards that were already looking like they were failing
               and are definitely long abandoned now. I don’t think I’ve
               seen a single page implementing OGP tags correctly in the
               last five years, and I’m not confident that a strictly
               correct reader implementation even exists (the specific
               matter I have in mind pertains to the prefix attribute).
               Also, a bunch of key URLs have been 404ing for many years
               now.
       
              FoxBJK wrote 1 day ago:
              They're not, at all. It was invented by Facebook, but it's
              literally just a few lines of metadata that applications can
              choose to read if they want.
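                 To that point, a consumer needs nothing more than a
                 standard HTML parser to pick the og:* pairs out of a
                 page. A minimal sketch in Python (the class name and the
                 sample page below are made up for illustration):

```python
from html.parser import HTMLParser

class OpenGraphCollector(HTMLParser):
    """Collect <meta property="og:*" content="..."> pairs from a page."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        prop = d.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = d.get("content", "")

page = """<html><head>
<meta property="og:title" content="Example Page">
<meta property="og:image" content="https://example.com/pic.png">
</head><body></body></html>"""

collector = OpenGraphCollector()
collector.feed(page)
print(collector.og)
# {'og:title': 'Example Page', 'og:image': 'https://example.com/pic.png'}
```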
       
                kaoD wrote 1 day ago:
                 Being invented by $company does not preclude it from
                 being a standard. [1]
                 
                 > A technical standard may be developed privately or
                 unilaterally, for example by a corporation, regulatory
                 body, military, etc.
                
                 PDF is now an international standard (ISO 32000), but it
                 was invented by Adobe. HTML was invented at CERN and is
                 now controlled by the W3C (a private consortium). OpenGL
                 was created by SGI and is maintained by the Khronos
                 Group.
                
                All had different "ownership" paths and yet I'd say all of them
                are standards.
                
  HTML          [1]: https://en.wikipedia.org/wiki/Technical_standard
       
                  albedoa wrote 1 day ago:
                  Did you mean to type "does not" in that first sentence?
                  Otherwise, the rest of your comment acts as evidence against
                  it.
       
                    kaoD wrote 1 day ago:
                    Yep, it was a typo. Thanks! Fixed.
       
       
   DIR <- back to front page