        _______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
  HTML Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
  HTML   You already have a Git server
       
       
        JimmaDaRustla wrote 19 hours 48 min ago:
         Git does not have a client/server architecture
       
        brendyn wrote 21 hours 26 min ago:
         So my friend and I were trying to learn some coding together. We had
         our laptops on the same wifi and I wanted us to use git without
         depending on GitHub, but I was completely stumped as to how to
         actually connect us. I don't want us to set up SSH servers on each
         other's laptops, giving each other full access to our computers, and
         sending patches to each other across the living room seems like
         overkill when we're just sitting around hacking away, wanting to
         share what we've come up with every few minutes. I still have no idea
         how I would solve this. Maybe I'd just try coding with a syncthing
         directory shared between us, but then that's totally giving up on
         git.
          xk3 wrote 17 hours 31 min ago:
           I think if you just set up SSH a certain way you can then use git or
          sftp for access:
          
              Match User gituser
                  ChrootDirectory /srv/git_chroot
                  ForceCommand internal-sftp
                  AllowTcpForwarding no
                  X11Forwarding no
                  PermitTTY no
          
           But tbh sending patches is fun and easy! After you force yourself to
           do it a few times, you might even prefer it to push/pull.
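           
           A minimal sketch of that round-trip (the patch filename here is
           illustrative; format-patch generates the real one):
           
               # on the sender's machine: one .patch file per commit
               git format-patch -1 HEAD
           
               # hand the file over however you like, then on the receiver:
               git am 0001-my-change.patch   # applies it with authorship intact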
       
          OkayPhysicist wrote 20 hours 36 min ago:
           If you put a git repository on a shared drive via git clone --bare
           (which skips checking out a branch and doesn't set up any origin-type
           stuff), you can use that as a remote, pushing/pulling etc. Setting up
           a network-available shared drive on one of the two computers
           shouldn't be too difficult, and by git's distributed nature it's not
           exactly the end of the world if access is temporary.
          
          Another option would be to use the email-based workflow, but that's
          quite different from most people's expected git experience.
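           
           A rough sketch of the shared-drive setup (paths and branch name are
           illustrative, assuming the share is mounted at /mnt/share):
           
               # seed the shared drive with a bare copy of the project
               git clone --bare myproject /mnt/share/myproject.git
           
               # on each laptop, use it like any other remote
               git remote add shared /mnt/share/myproject.git
               git push shared main
               git pull shared main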
       
          ekzy wrote 21 hours 13 min ago:
          Have you tried git daemon?
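           
           A minimal sketch for a trusted LAN (paths and address are
           illustrative; git daemon is read-only unless receive-pack is
           enabled):
           
               # on one laptop, export every repo under /srv/git
               git daemon --base-path=/srv/git --export-all --reuseaddr \
                   --enable=receive-pack   # allows anonymous pushes: LAN only!
           
               # on the other laptop
               git clone git://192.168.1.10/myproject.git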
          
  HTML    [1]: https://git-scm.com/book/en/v2/Git-on-the-Server-Git-Daemon
       
        cncjchsue7 wrote 22 hours 52 min ago:
        git push prod
        
        Until you have more users than dollars that's all you need.
       
        sam_lowry_ wrote 23 hours 9 min ago:
         It's slightly more complicated to host your own git server on a generic
         Linux box, because of file permissions.
        
        I wrote a HOWTO a few weeks ago:
        
  HTML  [1]: http://mikhailian.mova.org/node/305
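         
         For the common case, git also has a built-in answer to the permissions
         problem; a minimal sketch, assuming a shared unix group (the group
         name is illustrative):
         
             # mark the repo group-writable so several users can push to it
             git init --bare --shared=group myrepo.git
             chgrp -R developers myrepo.git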
       
        lukasnxyz wrote 1 day ago:
        I do this for blog posts, but for server side work I just rsync my dir,
        much easier
       
        isolay wrote 1 day ago:
        I wonder what's the missing link between this manual setup and
        something like gitea. Is gitea just automation of this?
       
        BatteryMountain wrote 1 day ago:
         In interviews, I've literally asked senior devops engineers and senior
         software engineers if they have hosted their own git servers and how to
         initialise one, and not a single one has mentioned git init --bare...
         which is disconcerting. They can deploy appliances (like gitlab, gitea)
         and build pipelines just fine, but none of them realized how git
         actually works underneath and how simple it all is.
       
          catmanjan wrote 1 day ago:
          Why is that disconcerting? There's simply too much software to be
          familiar with obscure features like this
       
            BatteryMountain wrote 20 hours 26 min ago:
             Because the same kind of guys have one global ssh key they use for
             all servers & all environments... they don't realise they can (and
             should) have multiple keys on one machine / one user. Different
             keys for different purposes.
             
             Same issues with git: they don't realise they can have multiple
             configs, multiple remotes, etc. Never mind knowing how to sign
             commits...
            
            They claim to be linux boffins but cannot initialise a git repo.
            This has nothing to do with elitism. This is basic stuff.
            
            What's next, they don't know what a bootloader or a partition is?
            Or run database engine with default settings? Or install a server
            OS and never bother to look at firewall config?
            
            I'm truly not trying to be cruel.
       
              catmanjan wrote 16 hours 36 min ago:
              Most of those things are ops related, not software development
       
            JackMorgan wrote 1 day ago:
            Because too many bad interviews are all about ensuring that the
            candidate knows the exact same 1% of CS/SWE knowledge as the
            interviewer.
            
             Don't worry, karma dictates that when the interviewer goes looking
             they'll get rejected for not knowing some similarly esoteric graph
             theory equation or the internal workings of a NIC.
            
            Too much of our interviewing is reading the interviewer's mind or
            already knowing the answer to a trick question.
            
            The field is way too vast for anyone to even know a majority, and
            realistically it's extremely difficult to assess if someone is an
            expert in a different 1%.
            
            Sometimes I feel like we need a system for just paying folks to see
            if they can do the job. Or an actually trusted credentialing system
            where folks can show what they've earned with badges and such.
            
            A better interview question about this subject doesn't assume they
            have it memorized, but if they can find the answer in a short time
            with the internet or get paralyzed and give up. It's a very
            important skill to be able to recognize you are missing information
            and researching it on the Internet.
            
             For example, one of my most talented engineers didn't really know
             that much about CS/SWE. However, he had some very talented buddies
             on a big discord server who could help him figure out anything. I
             kid you not, this kid with no degree and no experience other than
             making a small hobby video game would regularly tackle the most
             challenging projects we had. He'd just ask his buddies when he got
             stuck and they'd point him to the right blog posts and books. It
             was like he had a real-life TTRPG Contacts stat. He was that hungry
             and smart enough to listen to his buddies, and then actually clever
             enough to learn on the job and figure it out. He got more done in a
             week than the next three engineers of his cohort combined (and this
             was before LLMs).
            
            So maybe what we should test isn't data stored in the brain but
            ability to solve a problem given internet access.
       
            Cthulhu_ wrote 1 day ago:
             In a sense it's anything but obscure; that is, it's one of the most
             basic features of the tool and the first thing (well, git init
             anyway) anyone ever uses.
            
            But that's why people don't know about it, because they skip past
            the basics because in practice you never use it or need to know
            about it.
            
             This is the reality of software engineering and the like, though:
             mostly you learn what you need to know, because learning everything
             is usually wasteful and mostly goes unused, and there's a lot
             available.
            
            (I haven't been able to read documentation or a software book end
            to end in 20 years)
       
        AoifeMurphy wrote 1 day ago:
         Self-hosting Git isn’t just geek freedom, it’s a mindset of
         redundancy. I’ve had repos vanish due to platform bans; that’s when
         it really hit me that “distributed” isn’t just a design, it’s a
         reminder of responsibility.
       
        mmaaz wrote 1 day ago:
        Some time ago, I was on a team of researchers collaborating with a
        hospital to build some ML models for them. I joined the project
         somewhat late. There was a big fuss over the fact that the hospital
         servers were not connected to the internet, so the researchers couldn't
         use GitHub, and they had been stalled for months. I told them that
         before GitHub there was `git`, and it is already on the servers... I
         "set up" a git system for them.
       
        m463 wrote 1 day ago:
        You can do it with just directories too.
        
          mkdir -p repo/project
          cd repo/project
          git init --bare
        
          cd ../..
          git clone repo/project
          cd project
          (do git stuff)
       
        TZubiri wrote 1 day ago:
        Hmm. I typically have:
        
        - a prod server ( and a test server) with a git repo.
        
        - a local machine with the git repo.
        
        - a git server where code officially lives, nowadays just a github.
        
        If I were to simplify and run my own git server of the third kind, I
        would probably not run a server for the sole purpose of hosting code,
        it would most likely run on the prod/test server.
        
        So essentially I would be eliminating one node and simplifying. I don't
        know, maybe there's merits to having an official place for code to be
        in. Even if just semantics.
        
         I know you can also use branches: keep a "master" branch with the code
         and have migrations just be merges from master into a prod branch. But
         then again, I could have just master branches; if it's on the test
         server then it's the test branch.
         
         I don't know if spending time reinventing git workflows is a very
         efficient use of brain juice though.
       
        OhMeadhbh wrote 1 day ago:
        Yup. I do something similar: HTTPS://BI6.US/GI/B
       
        Froztnova wrote 1 day ago:
        Huh, y'know I shouldn't be surprised that this is possible given what I
        know about ssh and git. Yet I am.
       
        ctm92 wrote 1 day ago:
        you don't even need ssh, you can also just store the remote on a local
        or remote file system
       
        hk1337 wrote 1 day ago:
        You could just put a bare repo on a usb drive and pass it around but I
        wouldn’t recommend it.
       
        quasarj wrote 1 day ago:
        This is, in fact, what git was made for
       
        gwd wrote 1 day ago:
         Git post-update hooks to do deployment FTW.  I looked into the whole
         push-to-GitHub approach to kick off CI and deployment; but someone
         mentioned the idea of pushing over ssh to a repo on your server and
         having a post-update hook do the deployment, and that turned out to be
         much simpler.
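         
         A minimal version of such a hook (paths and branch name are
         assumptions), saved as hooks/post-update in the bare repo on the
         server and made executable:
         
             #!/bin/sh
             set -e
             # check the freshly pushed branch out into the live directory
             GIT_WORK_TREE=/srv/www/myapp git checkout -f main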
       
        whirlwin wrote 1 day ago:
         This is how Heroku has done it since it started out 10+ years ago.
        
        $ git push heroku master
        
        So you'll have to make sure to push to e.g. GitHub as well for version
        control.
       
        cycomanic wrote 1 day ago:
         I suspect many who always use git remotely don't know that you can
         easily work with local repositories as well using the file protocol:
         git clone file:///path/to/repository will create a clone of the
         repository at the path.
        
        I used that all the time when I had to move private repositories back
        and forth from work, without ssh access.
       
          lionkor wrote 1 day ago:
           You don't need to specify the protocol.
           
           For the file protocol, just type a path.
           
           For ssh, type an ssh connection string like
           lion@myserver.test:path/to/file
       
        donatj wrote 1 day ago:
         Back when I started at my current job... 15 years ago... we had no
         central git server. No PR workflow. We just git pull-ed from each
         other's machines. It worked better than you would expect.
         
         We then went to using a central bare repo on a shared server, then to
         hosted gitlab (? I think - it was Ruby and broke constantly),
         eventually landing on GitHub.
       
          globular-toast wrote 1 day ago:
          This is classic git usage. A "pull request" was literally just asking
          someone to pull from your branch. GitHub co-opted the term for their
          own thing.
          
          The thing that people really don't seem to get these days is how your
          master branch is a different branch from someone else's master
          branch. So pulling from one master to another was a normal thing.
          When you clone you get a copy of all the branches. You can commit to
          your master branch all you want, it's yours to use however you want
          to.
       
            kragen wrote 1 day ago:
            On GitHub, too, a "pull request" is literally just asking someone
            to pull from your branch.
       
              globular-toast wrote 1 day ago:
              Except your branch has to be on GitHub. The point is pulling used
              to happen across different repos a lot more.
       
              hshdhdhehd wrote 1 day ago:
              Isn't it asking someone to pull your branch (if forked) then
              merge it into to master
       
                kragen wrote 1 day ago:
                Git pull is git fetch plus git merge.
       
                  Sohcahtoa82 wrote 15 hours 53 min ago:
                  I've run into so many people that think "git pull" doesn't do
                  anything unless you run "git fetch" first.
                  
                  In my 10+ year career, I'm not sure I've ever run "git fetch"
                  manually.  I've never had a time where I wanted to fetch
                  upstream changes, but not merge (or rebase) them into my
                  branch.
       
                    kragen wrote 12 hours 59 min ago:
                    Yeah, I don't remember having done that either.  But I've
                    often had changes where the merge failed because of
                    un-checked-in changes, and then it was convenient to be
                    able to run `git merge` after checking in the changes
                    rather than `git pull` because it didn't need to talk to
                    the remote.
       
        iwontberude wrote 1 day ago:
        “By default, git won’t let you push to the branch that is currently
        checked out”
        
        TIL that people use non-bare git.
       
          mook wrote 1 day ago:
          I use non-bare git because it's just transferring code between
          multiple machines for cross-platform locally run software; that is,
          it's just checkouts on mac, Windows, and Linux and commits being
          pushed amongst them. But then I also use worktrees and stuff, so
          maybe I'm weird.
       
        simonw wrote 1 day ago:
        > It’s also backed up by default: If the server breaks, I’ve still
        got the copy on my laptop, and if my laptop breaks, I can download
        everything from the server.
        
        This is true, but I do also like having backups that are entirely
        decoupled from my own infrastructure. GitHub personal private accounts
        are free and I believe they store your data redundantly in more than
        one region.
        
         I imagine there's a way to set up a hook on your own server such that
         any pushes are then pushed to a GitHub copy without you having to do
         anything else yourself.
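         
         There is: a sketch of such a hook (the GitHub URL is a placeholder),
         saved as hooks/post-receive in the bare repo and made executable:
         
             #!/bin/sh
             # mirror every ref (branches, tags, deletions) to the backup
             git push --mirror git@github.com:you/backup.git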
       
          nicce wrote 1 day ago:
          > I imagine there's a way to setup a hook on your own server such
          that any pushes are then pushed to a GitHub copy without you having
          to do anything else yourself.
          
          For most people, that would defeat the purpose of self-hosting.
       
            simonw wrote 1 day ago:
            If your purpose for self-hosting is fear that GitHub will ignore
            their own promises in their terms of service and abuse their
            custody of your data then sure. [1] (There's also the "to comply
            with our legal obligations" bit, which is a concern if you're doing
            things that governments around the world may have issue with.)
            
            I expect there are people who like to self-host so that they're not
            dependent on GitHub up-time or having their accounts banned and
            losing access to their data. For them, having a secondary GitHub
            backup should still be useful.
            
  HTML      [1]: https://docs.github.com/en/site-policy/github-terms/github...
       
              nicce wrote 1 day ago:
              > If your purpose for self-hosting is fear that GitHub will
              ignore their own promises in their terms of service and abuse
              their custody of your data then sure.
              
               None of that protects against U.S. laws or the current political
               climate. There is no guarantee for an EU citizen that GitHub
               processes data only in the EU or that the data never leaves it.
               So it is all about data privacy.
       
              lionkor wrote 1 day ago:
              That's not accurate. I've had one of my own accounts blocked,
              with zero access until I talked to their support and convinced
              them I was, in fact, a real person.
              
              At that moment, if I had any important private repos, they would
              be gone.
       
                simonw wrote 1 day ago:
                That's no reason not to use them as a third offsite backup on
                top of your laptop and your own server hosting, which is what
                I'm suggesting here.
       
        gijoeyguerra wrote 1 day ago:
        There’s also git daemon.
       
        tonymet wrote 1 day ago:
         My favorite git trick is using etckeeper & git subtree to manage
         multiple machines from a single repo. A single git repo can “fan
         out” to dozens of instances. It’s useful even for “managed”
         hosts with terraform because etckeeper snapshots config with every
         change, catching bugs in the terraform config.
         
         During the dev / compile / test flow, git makes a lightweight CI that
         reduces the exposure to your primary repo. Just run `watch -n 60
         make` on the target and push using `git push`. The target can run
         builds without having any access to your primary (GitHub) repo.
       
          dlisboa wrote 22 hours 37 min ago:
          Care to share a little bit more about this? I've been thinking of
          using git with `--work-tree=/` to track system files, so I'm
          interested in these "unusual" setups with git. Not sure I got the
          "fan out" concept here.
       
            tonymet wrote 20 hours 15 min ago:
             I call it “fan out” because it allows one repo to fan out to
             many deployments. The monorepo can be pushed to GitHub/gitlab and
             backed up regularly. It’s a lot easier to manage one big repo
             than dozens of tiny ones.  And it helps with dependencies across
             machines.  You can version-control the entire network using the
             monorepo's commits.
       
            tonymet wrote 20 hours 23 min ago:
             `apt install etckeeper` will set up a repo in /etc/.git for you. It
             tracks any changes made via apt, and it runs nightly for other
             changes.
            
            On your primary repo, create dirs for each machine e.g.
            
              monorepo/
              ├─ Machine1/
              │  ├─ usr/share/www/
              │  ├─ etc/
              ├─ machine2/
              │  ├─ etc/
            
            Create remotes for each machine+repo e.g. `git remote add
            hosts/machine1/etc ssh://machine1/etc`    then `git fetch
            hosts/machine1/etc`
            
            Then add the subtree with `git subtree add -P machine1/etc
            hosts/machine1/etc master`
            
            When you want to pull changes you can `git subtree pull` or `git
            subtree push …`
            
            If you end up making changes in your monorepo, use push. If you
            make changes directly on the machine (or via terraform), pull
            
            This way you can edit and manage dozens of machines using a single
            git repo, and git subtree push to deploy the changes.  No deploy
            scripts.
       
              dlisboa wrote 58 min ago:
              Thanks, this sounds interesting. I wasn’t aware of etckeeper.
              So I take it you do periodic git pulls to update files and keep
              it all in check.
              
              The idea of a folder per machine is very good.
       
        kragen wrote 1 day ago:
        You probably want to use a bare repository (git init --bare) rather
        than `git config receive.denyCurrentBranch updateInstead`, which will
        cause your pushes to fail if you edit anything locally in the checkout.
         For [1] I run a pull from the post-update hook of [2], [3], which is
         short enough that I'll just include it here:
        
            #!/bin/sh
            set -e
        
            echo -n 'updating... '
            git update-server-info
            echo 'done. going to dev3'
            cd /home/kragen/public_html/sw/dev3
            echo -n 'pulling... '
            env -u GIT_DIR git pull
            echo -n 'updating... '
            env -u GIT_DIR git update-server-info
            echo 'done.'
        
        You can of course also run a local site generator here as well,
        although for dev3 I took a lighter-weight approach — I just checked
        in the HEADER.html file that Apache FancyIndexing defaults to including
        above the file directory listing and tweaked some content-types in the
        .htaccess file.
        
        This could still fail to update the checkout if it has local changes,
        but only if they create a merge conflict, and it won't fail to update
        the bare repo, which is probably what your other checkouts are cloned
        from and therefore where they'll be pulling from.
        
  HTML  [1]: http://canonical.org/~kragen/sw/dev3/
  HTML  [2]: http://canonical.org/~kragen/sw/dev3.git
  HTML  [3]: http://canonical.org/~kragen/sw/dev3.git/hooks/post-update
       
          PantaloonFlames wrote 1 day ago:
          regarding that post-update script, can you explain?
          
          I would think you'd want to
          
                cd /home/kragen/public_html/sw/dev3
                git update-server-info
                git pull
          
          ..in that order.
          
           And I wouldn't think you'd need to run git update-server-info again,
           after git pull.  My understanding is that update-server-info makes
           updates to info/refs, which is necessary _after a push_.
          
          What am I missing?
       
            kragen wrote 1 day ago:
            The first update-server-info is running in the bare repo, which is
            the place I just pushed to, which is where sw/dev3 is going to pull
            from.
            
            I'm not sure the second update-server-info is necessary.
            
            If you're asking about the env -u, that's because Git sets that
            variable so that commands know which repo they're in even if you cd
            somewhere else, which is exactly what I don't want.
       
          kragen wrote 1 day ago:
          Also bare repositories are a useful thing to put on USB pendrives.
       
            3eb7988a1663 wrote 1 day ago:
            For a USB drive, I would be more likely to use a bundle. Intended
            for offline distribution of a repository. Plus it is a single file,
            so you do not have to pay the transfer overhead of many small
            files.
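             
             A sketch of that workflow (paths are illustrative):
             
                 # pack the whole repo, all refs, into a single file
                 git bundle create /media/usb/project.bundle --all
             
                 # on the other machine: clone from it like a remote
                 git clone /media/usb/project.bundle project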
       
              kragen wrote 1 day ago:
              You can repack your repo before you clone it onto the pendrive,
              and once it's there you can push and pull to it many times. 
              Granted, pendrives are fast enough these days that copying an
              entire bundle every time is probably fine.
       
        patal wrote 1 day ago:
         How would I sync access if more than one person pushes over ssh to the
         git repo? I assume some syncing would be necessary.
       
          tasuki wrote 1 day ago:
          Same as always, with any other remote?
          
          (Use `git pull`? If the different people push to different branches,
          then there's no conflict and no problem. If you try to push different
          things into the same branch, the second person will get told their
          branch is out of date. They can either rebase or - if this is allowed
          by the repo config - force push over the previous changes...)
       
            patal wrote 1 day ago:
            Sure, if they push one after the other. If they push at the same
            time however, does Git handle the sync on its own?
       
              lionkor wrote 1 day ago:
              yeah
       
                patal wrote 1 day ago:
                nice, thx
       
        ziofill wrote 1 day ago:
        What is it with code on blog posts where fonts have uneven size and/or
        font types (e.g. italics mixed in with regular)? I see this from time
        to time and I wonder if it’s intentional.
       
          jcgl wrote 1 day ago:
          Found a solution to this recently (this is not my blog): [1] CSS
          needs some tweaking for iOS Safari, I guess.
          
  HTML    [1]: https://nathan.rs/posts/fixing-ios-codeblocks/
       
          stabbles wrote 1 day ago:
          Seen this a lot as well with Safari on the iPhone.
       
        jrm4 wrote 1 day ago:
         Cannot emphasize this whole notion enough; very roughly, Github is to
         git what gmail is to email.
         
         It's probably mostly fine if that's the thing most everybody wants to
         use and it works well; but it's very unwise to forget that the point
         was NEVER to have a deeply centralized thing -- and that idea is
         BUILT into the very structure of all of it.
       
          unethical_ban wrote 1 day ago:
          I'd say GitHub is to git what Facebook is to email.
          
           GitHub's value is in network effects and features like bug and issue
           tracking.
          
          At least Gmail is still an email client that communicates with other
          systems.
       
          m3047 wrote 1 day ago:
          DNS is in the same predicament. :-p
       
          pixelmonkey wrote 1 day ago:
           This reminded me of another discussion on HN a few months ago,
           wherein I was reflecting on how the entire culture of internet
           standards has changed over time:
          
          "In the 80s and 90s (and before), it was mainly academics working in
          the public interest, and hobbyist hackers. Think Tim Berners-Lee,
          Vint Cerf, IETF for web/internet standards, or Dave Winer with RSS.
          In the 00s onward, it was well-funded corporations and the engineers
          who worked for them. Think Google. So from the IETF, you have the
          email protocol standards, with the assumption everyone will run their
          own servers. But from Google, you get Gmail.
          
          [The web] created a whole new mechanism for user comfort with
          proprietary fully-hosted software, e.g. Google Docs. This also
          sidelined many of the efforts to keep user-facing software open
          source. Such that even among the users who would be most receptive to
          a push for open protocols and open source software, you have strange
          compromises like GitHub: a platform that is built atop an open source
          piece of desktop software (git) and an open source storage format
          meant to be decentralized (git repo), but which is nonetheless 100%
          proprietary and centralized (e.g. GitHub.com repo hosting and GitHub
          Issues)."
          From:
          
  HTML    [1]: https://news.ycombinator.com/item?id=42760298
       
            jrm4 wrote 1 day ago:
            Yes. And honestly though, this is the sort of thing that makes me
            generally proclaim that Free Software and Open Source won.
            
            It was extremely unlikely that it would be some kind of free
            utopia; but also, it's extremely remarkable what we've been able to
            keep generally free, or at least with a free-enough option.
       
          morshu9001 wrote 1 day ago:
          Idk if git was designed to not be used in a centralized way. Like all
          other common CLIs, it was simply designed to work on your PC or
          server without caring who you are, and nothing stopped a corp from
          turning it into a product. Torvalds made git and Linux, then he put
          Linux on Github.
       
            kragen wrote 1 day ago:
            The reason Linus wrote Git was specifically because he was
            unwilling to accept the centralization of the existing popular
            source-control systems like CVS and SVN, and Linux's license to the
            unpopular proprietary decentralized source control system it was
            using got revoked because Larry McVoy threw a tantrum.    Linus
            needed an open-source alternative, so he tried the unpopular
            open-source source-control systems like Monotone, but he felt they
            suffered from both featuritis and inadequate performance, so he
            wrote a "stupid content tracker" called Git.
       
            semanticc wrote 1 day ago:
               Linux is not developed on GitHub.
       
              morshu9001 wrote 1 day ago:
              I know it's only a mirror, but it looks like an approval
       
            jrm4 wrote 1 day ago:
            It's not a "CLI" and yes, "decentralized" was literally one of the
            points of it.
       
              morshu9001 wrote 1 day ago:
              "Distributed" was the point and the language their site uses*,
              not decentralized. It's only described as a convenience and
              reliability thing in contrast to the mess known as CVS. I haven't
              seen a note about avoiding one entity having too much power, even
              if that's a goal some users have in mind. Normally you have one
              master repo, or "blessed" as the site calls it.
              
              It's like, a Redis cluster is distributed but not decentralized.
              The ssh protocol is not decentralized. XMPP, Matrix, and Bitcoin
              are decentralized protocols, first two via federation.
              
              *
              
  HTML        [1]: https://git-scm.com/about/distributed
       
            ashton314 wrote 1 day ago:
            Git was explicitly designed to be decentralized.
       
        prmph wrote 1 day ago:
         The more I use git, the more depth I discover in it.
         
         So many features and concepts; it's easy to think you understand the
         basics, but you need to dig deep into its origin and rationale to
         begin to grasp the way of thinking it is built around.
         
         And the API surface area is much larger than one would think, like an
         iceberg.
         
         So I find it really weirdly low-level in a way. Probably what is needed
         is a higher-level CLI to use it in the most sensible, default way,
         because certainly the mental model most people use it with is
         inadequate.
       
        mberning wrote 1 day ago:
         I do something similar. I create a bare repo on my dropbox folder or
         nas mount, then check out from the bare repo's file path to wherever
         I'll be doing the work.
       
        binary132 wrote 1 day ago:
         This is definitely nice, but it doesn’t really support the full range
         of features of Git, because for example submodules cannot be local
         references.  It’s really just easier to set up gitolite and use it in
         almost the same exact way, but it’s much better.
       
        tangotaylor wrote 1 day ago:
        Beware of using this to publish static sites: you can accidentally
        expose your .git directory to the public internet.
        
        I got pwned this way before (by a pentester fortunately). I had to
        configure Apache to block the .git directory.
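         
         For reference, a sketch of such a block in Apache 2.4 syntax:
         
             # deny requests for any path containing a .git directory
             <DirectoryMatch "/\.git">
                 Require all denied
             </DirectoryMatch>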
       
          kragen wrote 1 day ago:
          I expose my .git directory to the public internet on purpose.  If I
          don't, how will anyone else clone the repo?
       
          CGamesPlay wrote 1 day ago:
          But what, exactly, was pwned? Did you have secrets in the git repo?
       
            tangotaylor wrote 1 day ago:
            No secrets like auth credentials or tokens but:
            
            - Deleted files and development artifacts that were never meant to
            go public.
            
            - My name and email address.
            
            - Cringy commit messages.
            
            I assumed these commits and their metadata would be private.
            
            It was embarrassing. I was in high school, I was a noob.
       
            tasuki wrote 1 day ago:
            I expose the .git directories on my web server and never considered
            it a problem. I also expose them on GitHub and didn't consider that
            a problem either...
       
              giobox wrote 1 day ago:
              Super common failure is accidental commit of a secret key. People
              suck at actually deleting something from git history. Had one
              colleague leak a Digital Ocean key this way with an accidental
              env file commit. He reverted, but the key is of course still
              present in the project history.
              
               The speed at which an accidentally committed and reverted key in
               a github public repo is compromised and used to, say, launch a
               fleet of stolen VPSes is incredible nowadays. Fortunately most of
               the time your provider will cancel the charges...
              
              This has always been the roughest part of git for me, the process
              to remove accidentally committed sensitive content. Sure we
              should all strive not to commit stupid things in the first place,
              and of course we have tools like gitignore, but we are all only
              human.
              
              > [1] "Sensitive data can be removed from the history of a
              repository if you can carefully coordinate with everyone who has
              cloned it and you are willing to manage the side effects."
              
  HTML        [1]: https://docs.github.com/en/authentication/keeping-your-a...
       
          jonhohle wrote 1 day ago:
          Instead of excluding non-public directories, I like to make an
          explicit `public` directory (or `doc`, `doc-root`, whatever you want
          to call it). Then configure your server to point to that subdirectory
          and don’t worry about the repo.
          
           I usually throw `etc` and `log` directories at the top level as well
           and put my server config in etc, and have a gitignore rule to ignore
           everything in logs, but it’s there and ready for painless
           deployment.
          
          Since the web root is already a sub directory, more sensitive things
          can go into the same repo without worrying about exposing them.
       
            cesnja wrote 1 day ago:
            You can still get hit by a path traversal exploit. The safest
            option is to only have the public files on the server.
       
              jonhohle wrote 23 hours 4 min ago:
              A path traversal is different from putting private files in a
              public directory. For a simple static site there will always be
              certs, /etc, and other things outside of the document root that
              shouldn’t be served.
       
            wizzwizz4 wrote 1 day ago:
            Storing volatile data (e.g. logs) in the git-managed directory is
            an excellent way to lose all your data.
            
  HTML      [1]: https://fediverse.blog/~/Prismo/on-prismo-data-loss
       
        jonhohle wrote 1 day ago:
        I feel like something was lost along the way.
        
             git init --bare
        
         will give you a git repo without a working tree (just the contents
        typically in the .git directory). This allows you to create things like
        `foo.git` instead of `foo/.git`.
        
        “origin” is also just the default name for the cloned remote. It
        could be called anything, and you can have as many remotes as you’d
        like. You can even namespace where you push back to the same remotes by
        changing fetch and push paths. At one company it was common to push
        back to `$user/$feature` to avoid polluting the root namespace with
         personal branches. It was also common to have `backup/$user` for
         pushing a backup of an entire local repo.
        
        I often add a hostname namespace when I’m working from multiple hosts
        and then push between them directly to another instead of going back to
        a central server.
        
        For a small static site repo that has documents and server config, I
        have a remote like:
        
             [remote "my-server"]
             url = ssh+git://…/deploy/path.git
             fetch = +refs/heads/*:refs/remotes/my-server/*
             push = +refs/heads/*:refs/remotes/my-laptop/*
        
        So I can push from my computer directly to that server, but those
        branches won’t overwrite the server’s branches. It acts like a
        reverse `git pull`, which can be useful for firewalls and other
        situations where my laptop wouldn’t be routable.
       
          lloeki wrote 1 day ago:
          > “origin” is also just the default name for the cloned remote
          
           I don't have a central dotfiles repo anymore (that I would always
           forget to push to); I have SSH access to my devices - via tailscale -
           anyway so I'm doing
          
              git remote add $hostname $hostname:.config
          
          and can cd ~/.config && git fetch/pull/rebase $hostname anytime from
          anywhere.
          
           I've been considering a bare repo + setting $GIT_DIR (e.g. via
           direnv) but somehow the dead-simple approach has trumped the lack of
           push ability.
       
            xk3 wrote 19 hours 52 min ago:
            What's the benefit of this compared to rsync or scp
            $hostname:.config/?
            
            I put my whole home folder in git and that has its benefits (being
            able to see changes to files as they happen) but if I'm just
            copying a file or two of config I'll just cat or scp
            it--introducing git seems needlessly complex if the branches are
            divergent
       
              lloeki wrote 4 hours 39 min ago:
              > just a file or two
              
              I don't have to remember which to copy
              
              > rsync or scp
              
               I don't have to remember which is most recent, nor even assume
               that "most recent" is a thing (i.e. nonlinear)
              
              It's all just:
              
              - a git fetch --all away to get
              
              - a git log --oneline --decorate --graph --all to find out who's
              where and when
              
              - diff and whatchanged for contents if needed
              
              - a cherry-pick / rebase away to get what I want, complete with
              automatic conflict resolution
              
              I can also have local experiments in local topic branches, things
              I want to try out but not necessarily commit to across all of my
              machines yet.
       
          imron wrote 1 day ago:
          GitHub is full of git anti-patterns
       
          eru wrote 1 day ago:
          Yes, I encourage my co-workers, when pushing to a common repo, to use
          `$user/$whatever` exactly to have their own namespace.    The main
          selling point I'm making is that it makes cleanup of old branches
          easier, and less conflict-prone.
          
           Tangentially related: when you have multiple local checkouts, often
           `git worktree` is more convenient than having completely independent
           local repositories.  See
          
  HTML    [1]: https://git-scm.com/docs/git-worktree
       
          ompogUe wrote 1 day ago:
           I often init a bare repo on single-use servers I'm working on.
          
          Then, have separate prod and staging clones parallel to that.
          
          Have a post-commit hook set on the bare repo that automatically
          pushes updates to the staging repo for testing.
          
          When ready, then pull the updates into prod.
          
           Might sound strange, but for certain client hosting situations, I've
           found it allows for faster iterations. ymmv
       
          mzajc wrote 1 day ago:
          > “origin” is also just the default name for the cloned remote.
          It could be called anything, and you can have as many remotes as
          you’d like.
          
          One remote can also hold more URLs! This is arguably more obscure
          (Eclipse's EGit doesn't even support it), but works wonders for my
          workflow, since I want to push to multiple mirrors at the same time.
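           
           A sketch of that setup (URLs are placeholders). Note that once an
           explicit push URL exists the implicit one is no longer used, so
           every mirror gets added explicitly:
           
               git remote set-url --add --push origin git@github.com:me/repo.git
               git remote set-url --add --push origin git@codeberg.org:me/repo.git
               git push origin main    # now pushes to both mirrors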
       
            mckn1ght wrote 1 day ago:
             Whenever I fork a repo I rename origin to “fork” and then add
             the parent repo as a remote named “upstream” so I can pull from
             that, rebase any of my own changes onto it, and push to fork as
             needed.
            
            Multiple remotes is also how you can combine multiple repos into
            one monorepo by just fetching and pulling from each one, maybe into
            different subdirectories to avoid path collisions.
       
              svieira wrote 23 hours 15 min ago:
              This sounds like submodules, but I'm guessing it's completely
              orthogonal ... multiple distinct remotes for the same
              _repository_, all of them checked out to different sub-paths;
              does that result in all the remotes winding up with all the
              commits for all the shared repositories when you push, or can you
              "subset" the changes as well?
       
                mckn1ght wrote 15 hours 30 min ago:
                 Yeah, it's different, I was thinking about a time I needed to
                 combine two separate repos into one. To do that, you clone one
                of them, then add a remote to the other one, fetch that, and
                `pull --rebase` or similar, and you'll replay all of the
                first's commits on top of the second's. I can't remember what I
                was thinking about the subdirectories, I guess they'd already
                have to be organized that way in the various repos to avoid
                conflicts or smushing separate sources together.
       
          kawsper wrote 1 day ago:
          I always thought it would have been better, and less confusing for
          newcomers, if GitHub had named the default remote “github”,
          instead of origin, in the examples.
       
            mckn1ght wrote 1 day ago:
            Is this something the remote can control? I figured it was on the
            local cloner to decide.
            
            Can’t test it now but wonder if this is changed if it affects the
            remote name for fresh clones:
            
  HTML      [1]: https://git-scm.com/docs/git-config#Documentation/git-conf...
       
            pwdisswordfishy wrote 1 day ago:
            GitHub could not name it so, because it's not up to GitHub to
            choose.
       
              seba_dos1 wrote 1 day ago:
              There are places where it does choose, but arguably it makes
              sense for it to be consistent with what you get when using "git
              clone".
       
            tobylane wrote 1 day ago:
             If I clone my fork, I always add the upstream remote straight away.
             Origin and upstream could each be github, which is ambiguous.
       
            masklinn wrote 1 day ago:
            How is it less confusing when your fork is also on github?
       
              matrss wrote 1 day ago:
               Requiring a fork to open pull requests as an outsider to a
               project is in itself an idiosyncrasy of GitHub that could be done
               without. Gitea and Forgejo for example support AGit: [1] .
              
              Nevertheless, to avoid ambiguity I usually name my personal forks
              on GitHub gh-.
              
  HTML        [1]: https://forgejo.org/docs/latest/user/agit-support/
       
                kragen wrote 1 day ago:
                No, it's a normal feature of Git.  If I want you to pull my
                changes, I need to host those changes somewhere that you can
                access.  If you and I are both just using ssh access to our
                separate Apache servers, for example, I am going to have to
                push my changes to a fork on my server before you can pull
                them.
                
                And of course in Git every clone is a fork.
                
                AGit seems to be a new alternative where apparently you can
                push a new branch to someone else's repository that you don't
                normally have access to, but that's never guaranteed to be
                possible, and is certainly very idiosyncratic.
       
                  bregma wrote 1 day ago:
                  > in Git every clone is a fork
                  
                   That's backwards. In Github every fork is just a git clone.
                   Before GitHub commandeered the term, "fork" was already in
                   common use and it had a completely different meaning.
       
                    kragen wrote 1 day ago:
                    As I remember it, it was already in common use with exactly
                    the same denotation; they just removed the derogatory
                    connotation.
       
                  matrss wrote 1 day ago:
                  Arguably the OG workflow to submit your code is `git
                  send-email`, and that also doesn't require an additional
                  third clone on the same hosting platform as the target
                  repository.
                  
                  All those workflows are just as valid as the others, I was
                  just pointing out that the way github does it is not the only
                  way it can be done.
       
                    kragen wrote 1 day ago:
                    Yes, that's true.  Or git format-patch.
       
                masklinn wrote 1 day ago:
                > Requiring a fork to open pull requests as an outsider to a
                project is in itself a idiosyncrasy of GitHub that could be
                done without. Gitea and Forgejo for example support AGit: [1] .
                
                Ah yes, I'm sure the remote being called "origin" is what
                confuses people when they have to push to a refspec with push
                options. That's so much more straightforward than a button
                "create pull request".
                
  HTML          [1]: https://forgejo.org/docs/latest/user/agit-support/
       
                  ratmice wrote 1 day ago:
                  As far as I'm concerned the problem isn't that one is easier
                  than the other.
                  It's that in the github case it completely routes around the
                  git client.
                  With AGit+gitea or forgejo you can either click your "create
                  pull request" button,
                  or make a pull request right from the git client. One is
                  necessarily going to require more information than the other
                  to reach the destination...
                  
                  It's like arguing that instead of having salad or fries on
                  the menu with your entree they should only serve fries.
       
              grimgrin wrote 1 day ago:
              agreed, you'd need a second name anyway. and probably "origin"
              and "upstream" is nicer than "github" and "my-fork" because.. the
              convention seems like it should apply to all the other git hosts
              too: codeberg, sourcehut, tfs, etc
       
                nicoburns wrote 1 day ago:
                Huh. Everyone seems to use "origin" and "upstream". I've been
                using "origin" and "fork" the whole time.
       
                  masklinn wrote 1 day ago:
                  I use "mine" for my fork.
       
          webstrand wrote 1 day ago:
          git clone --mirror  
          
           is another good one to know: it makes a bare repository that is an
           exact clone (including all branches, tags, notes, etc) of a remote
           repo, unlike a normal clone, which is set up with local tracking
           branches of the remote.
          
          It doesn't include pull requests, when cloning from github, though.
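           
           A quick sketch (the URL is a placeholder):
           
               git clone --mirror https://example.com/project.git
               cd project.git
               git remote update --prune    # refresh all refs later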
       
            BatteryMountain wrote 1 day ago:
             I've used this method to make a backup of our 80+ repos at a
             previous company: just grab the URLs from the API & run the git
             clone in a for loop. Works great!
       
            Cheer2171 wrote 1 day ago:
            > It doesn't include pull requests, when cloning from github,
            though.
            
            Because GitHub pull requests are a proprietary, centralized,
            cloud-dependent reimplementation of `git request-pull`.
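             
             For reference, the built-in generates a plain-text request you can
             send anywhere (arguments here are illustrative):
             
                 # summarize changes since v1.0, fetchable from the given URL
                 git request-pull v1.0 https://example.com/me/repo.git main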
            
            How the "free software" world slid head first into a proprietary
            cloud-based "open source" world still boils my blood. Congrats,
             Microsoft loves and owns it all, isn't that what we always
             wanted?
       
              floydnoel wrote 21 hours 49 min ago:
              wouldn't it be cool to have an open source (maybe even
              p2p/federated) version of GitHub?
       
              cwbriscoe wrote 1 day ago:
              Wasn't that how it worked before Microsoft bought Github?
       
              udev4096 wrote 1 day ago:
               It still blows my mind how git has lost its original ideas of
               decentralized development because of github, and how github, a
               for-profit - centralized - closed-source forge, became the center
               for lots of important open source projects. We need radicle,
               forgejo, gitea to catch up even more!
       
                com2kid wrote 1 day ago:
                Git brings different things to different people.
                
                For me the largest advantage of Git was being able to easily
                switch branches. Previously I'd have to have multiple copies of
                an entire source repo on my machine if I wanted to work on
                multiple things at the same time. Likewise a patch set going
                through CR meant an entire folder on my machine was frozen
                 until I got feedback.
                
                Not having to email complex patches was another huge plus. I
                was at Microsoft at the time and they had home made scripts
                (probably Perl or VBS, but I forget what) that applied patches
                to a repo.
                
                It sucked.
                
                Git branch alone was worth the cost of changing over.
       
                viraptor wrote 1 day ago:
                It didn't really lose the original ideas. It just never learned
                that people don't want to use it the way kernel devs want to
                use it. Git never provided an easy github-like experience, so
                GitHub took over. Turns out devs in general are not into the
                "setup completely independent public mailing lists for
                projects" idea.
       
                  palata wrote 1 day ago:
                  > Turns out devs in general are not into the "setup
                  completely independent public mailing lists for projects"
                  idea.
                  
                  My feeling is that devs in general are not into the "learning
                  how to use tools" idea.
                  
                  They don't want to learn the git basics, they don't want to
                  learn the cmake basics, ...
                  
                  I mean that as an observation more than a criticism. But to
                  me, the fact that git was designed for those who want to
                  learn powerful tools is a feature. Those who don't can use
                  Microsoft. It all works in the end.
                  
                  Fun fact: if I want to open source my code but not get
                  contributions (or rather feature requests by people who
                  probably won't ever contribute), I put my git repo on
                  anything that is not GitHub. It feels like most professional
                  devs don't know how to handle anything that is not on GitHub
                  :-). Bonus point for SourceHut: if someone manages to send a
                  proper patch on the mailing list, it usually means that they
                  know what they are doing.
       
                    viraptor wrote 1 day ago:
                    > My feeling is that devs in general are not into the
                    "learning how to use tools" idea.
                    
                    Given the number of vim, emacs, nix, git, i3, etc. users
                    who are proud of it and all the customisations they do, I
                    don't think so. Like, there will be a decent group, but not
                    generalisable to "devs".
       
                    jychang wrote 1 day ago:
                    > My feeling is that devs in general are not into the
                    "learning how to use tools" idea
                    
                    Well, the devs learnt how to use Github, didn't they? Seems
                    like people CAN learn things that are useful. I can also
                    make the argument that Github pull requests are actually
                    more powerful than git request-pull in addition to having a
                    nicer UI/UX.
                    
                    Being upset that people aren't using git request-pull is
                    like the creator of Brainfuck being upset that scientists
                     aren't using Brainfuck instead of something more powerful
                     with a better UI/UX, like Python. It's kinda obvious
                     which one is better to use...
       
                      palata wrote 1 day ago:
                      > Seems like people CAN learn things that are useful
                      
                      I didn't say they could not.
       
                lisbbb wrote 1 day ago:
                I once worked at a smaller company that didn't want to shell
                out for github and we just hosted repos on some VM and used the
                ssh method.  It worked.  I just found it to be kind of clunky
                having come from a bigger place that was doing enterprise
                source control management with Perforce of all things. Github
                as a product was fairly new back then, but everyone was trying
                to switch over to Git for resume reasons there.  So then I go
                to this smaller place using git in the classic manner.
       
                afavour wrote 1 day ago:
                I don’t think it’s really that surprising. git didn’t
                become popular because it was decentralised, it just happened
                to be. So it stands to reason that part doesn’t get
                emphasised a ton.
       
                  int_19h wrote 1 day ago:
                  It did become popular because it was decentralized, but the
                  specific features that this enabled were less about not
                  depending on a central server, and more about being able to
                  work with the same repo locally with ease without having to
                  be online for most operations (as was the case with
                  Subversion etc). Git lets me have a complete local copy of
                  the source with all the history, branches etc in it, so
                  anything that doesn't require looking at the issues can be
                  done offline if you did a sync recently.
                  
                  The other big point was local branches. Before DVCS, the
                  concept of a "local branch" was generally not a thing. But
                  now you could suddenly create a branch for each separate
                  issue and easily switch between them while isolating
                  unrelated changes.
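
                   With git that's just (branch names illustrative):

                       git switch -c fix-123   # local branch, no server
                       git switch main         # back instantly, isolated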
       
                seunosewa wrote 1 day ago:
                Once they killed mercurial on bitbucket, it was over.
       
              paulddraper wrote 1 day ago:
              Having a web interface was really appreciated by users, it would
              seem.
       
                yogishbaliga wrote 1 day ago:
                It is also other features such as GitHub workflow, releases,
                integration with other tools, webhooks etc. that make it
                useful.
       
                  paulddraper wrote 1 day ago:
                  That’s true, but GitHub was dominant prior to having most
                  of those features.
       
                jasode wrote 1 day ago:
                >Having a web interface
                
                It's not the interface, it's the web hosting.  People want a
                free destination server that's up 24/7 to store their
                repository.
                
                If it was only the web interface, people could locally install
                GitLab or Gitea to get a web browser UI. (Or use whatever
                modern IDE code editor to have a GUI instead of a CLI for git
                commands.)  But doing that still doesn't solve what GitHub
                solves:  a public server to host the files, issue tracking,
                etc.
                
                Before git & Github, people put source code for public access
                on SourceForge and CodeProject.  The reason was the same:  a
                zero-cost way to share code with everybody.
       
                  mr_toad wrote 1 day ago:
                  GitHub and the others also handle user authentication and
                  issue tracking, which aren’t part of Git itself.
       
                  pmontra wrote 1 day ago:
                  It was both.
                  
                  A 24/7 repository and a 24/7 web URL for the code. Those two
                  features together let devs inspect and download code, and
                  open and discuss issues.
                  
                  The URL also let automated tools download and install
                  packages.
                  
                  Familiar UI, network effects made the rest.
       
                    JoshTriplett wrote 1 day ago:
                    Exactly. The UI needs to live wherever the canonical home
                    for the project is, at least until we have a federated
                    solution.
                    
                    I'm really looking forward to federated forges.
       
                  chipsrafferty wrote 1 day ago:
                  No, actually it's the interface.  Many companies would
                  totally host it themselves, but the interface is what gives
                  GH value.
       
                    tweetle_beetle wrote 1 day ago:
                    GitLab is around a decade old, is a solid enterprise
                    product and has always had a very similar interface to
                    GitHub, at times even drawing criticism for being too
                    similar. There's more to it than that.
       
                  udev4096 wrote 1 day ago:
                  GH is essentially an unlimited storage space. There are
                  countless scripts which make it possible to even use it as
                  unlimited mounted storage.
       
                    FpUser wrote 1 day ago:
                    And then one day orange gets pissed off at yet another
                    country and it (the repo) is gone
       
                      wiz21c wrote 1 day ago:
                      orange ?
       
                        kevin_thibedeau wrote 23 hours 35 min ago:
                        The convicted felon.
       
              velcrovan wrote 1 day ago:
              When this kind of “sliding” happens it’s usually because
              the base implementation was missing functionality. Turns out CLI
              interfaces by themselves are (from a usability perspective)
              incomplete for the kind of collaboration git was designed to
              facilitate.
       
                derefr wrote 1 day ago:
                > Turns out CLI interfaces by themselves are (from a usability
                perspective) incomplete for the kind of collaboration git was
                designed to facilitate.
                
                git was designed to facilitate the collaboration scheme of the
                Linux Kernel Mailing List, which is, as you might guess... a
                mailing list.
                
                Rather than a pull-request (which tries to repurpose git's
                branching infrastructure to support collaboration), the
                intended unit of in-the-large contribution / collaboration in
                git is supposed to be the patch.
                
                The patch contribution workflow is entirely CLI-based... if you
                use a CLI mail client (like Linus Torvalds did at the time git
                was designed.)
                
                The core "technology" of this is, on the contributor side:
                
                1. "trailer" fields on commits (for things like `Fixes`,
                `Link`, `Reported-By`, etc)
                
                2. `git format-patch`, with flags like `--cover-letter` (this
                is where the thing you'd think of as the "PR description"
                goes), `--reroll-count`, etc.
                
                3. a codebase-specific script like Linux's
                `./scripts/get_maintainer.pl`, to parse out (from
                source-file-embedded headers) the set of people to notify
                explicitly about the patch — this is analogous to a PR's
                concept of "Assignees" + "Reviewers"
                
                4. `git send-email`, feeding in the patch-series generated in
                step 2, and targeting the recipients list from step 3. (This
                sends out a separate email for each patch in the series, but in
                such a way that the messages get threaded to appear as a single
                conversation thread in modern email clients.)
                
                And on the maintainer side:
                
                5. `s ~/patches/patch-foo.mbox` (i.e. a command in a CLI email
                client like mutt(1), in the context of the patch-series thread,
                to save the thread to an .mbox file)
                
                6. `git am -3 --scissors ~/patches/patch-foo.mbox` to split the
                patch-series mbox file back into individual patches, convert
                them back into an annotated commit-series, and build that into
                a topic branch for testing and merging.
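
                A minimal sketch of steps 2 and 4 (paths, branch names, and
                addresses illustrative):

                    # 2. topic branch -> patch series with a cover letter
                    git format-patch --cover-letter --reroll-count=2 \
                        -o outgoing/ origin/master..my-topic

                    # 4. mail the series to the people found in step 3
                    git send-email --to=maintainer@example.org \
                        --cc=list@example.org outgoing/*.patch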
                
                Subsystem maintainers, meanwhile, didn't use patches to get
                topic branches "upstream" [= in Linus's git repo]. Linus just
                had the subsystem maintainers as git-remotes, and then, when
                nudged, fetched their integration branches, reviewed them, and
                merged them, with any communication about this occurring
                informally out-of-band. In other words, the patch flow was for
                low-trust collaboration, while direct fetch was for high-trust
                collaboration.
                
                Interestingly, in the LKML context, `git request-pull` is
                simply a formalization of the high-trust collaboration workflow
                (specifically, the out-of-band "hey, fetch my branches and
                review them" nudge email). It's not used for contribution, only
                integration; and it doesn't really do anything you can't do
                with an email — its only real advantages are in keeping the
                history of those requests within the repo itself, and for
                forcing requests to be specified in terms of exact git refs to
                prevent any confusion.
       
                  scuff3d wrote 1 day ago:
                  I'm assuming a "patch" is a group of commits. So would a
                  "patch series" be similar to GitLab's notion of dependent
                  MRs?
       
                    noirscape wrote 1 day ago:
                    There's basically 2 major schools of thought for submitting
                    patches under git:
                    
                    * Pile of commits - each individual commit doesn't matter
                    as much as they all work combined. As a general rule, the
                    only requirement for a valid patch is that the final
                    version does what you say it does. Either the final result
                    is squashed together entirely and then merged onto "master"
                    (or whatever branch you've set up to be the "stable" one)
                    or it's all piled together. Keeping the commit history one
                    linear sequence of events is the single most important
                    element here - if you submit a patch, you will not be
                    updating the git hashes because it could force people to
                    reclone your version of the code and that makes it
                    complicated. This is pretty easy to wrap your head
                    around for a small project, but for larger projects it
                    quickly leaves a lot of the organizational tools git
                    gives you filled with junk commits that you have to
                    filter through. Most git forges encourage this PR
                    system because it is, again, newbie-friendly.
                    
                    * Patch series. Here, a patch isn't so much a series of
                    commits you keep adding onto, but is instead a much smaller
                    set of commits that you curate into its "most perfect form"
                    - each individual commit has its own purpose and they
                    don't/shouldn't bleed into each other. It's totally okay to
                    change the contents of a patch series, because until it's
                    merged, the history of the patch series is irrelevant as
                    far as git is concerned. This is basically how the LKML
                    (and other mailing list based) software development works,
                    but it can be difficult to wrap your head around (+years of
                    advice that "changing history" is the biggest sin you can
                    do with git, so don't you dare!). It tends to work the best
                    with larger projects, while being completely overkill for a
                    smaller tool. Most forges usually offer poor support for
                    patch series based development, unless the forge is
                    completely aimed at doing it that way.
       
                      derefr wrote 18 hours 25 min ago:
                      > It's totally okay to change the contents of a patch
                      series, because until it's merged, the history of the
                      patch series is irrelevant as far as git is concerned.
                      
                      Under the original paradigm, the email list itself —
                      and a (pretty much expected/required) public archive of
                      such, e.g. [1] for LKML — serves the same
                      history-preserving function for the patch series
                      themselves (and all the other emails that go back and
                      forth discussing them!) that the upstream git repo does
                      for the final patches-turned-commits. The commits that
                      make it into the repo reference URLs of threads on the
                      public mailing-list archive, and vice-versa.
                      
                      Fun fact: in the modern era where ~nobody uses CLI email
                      clients any more, a tool called b4 ( [2] ) is used to
                      facilitate the parts of the git workflow that interact
                      with the mailing list. The subcommand that pulls patches
                      out of the list (`b4 mbox`) actually relies on the public
                      web archive of the mailing list, rather than relying on
                      you to have an email account with a subscription to the
                      mailing list yourself (let alone a locally-synced mail
                      database for such an account.)
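
                      For illustration, with a hypothetical message-id:

                          # pull the whole thread down as an mbox:
                          b4 mbox 20250101120000.123-1-dev@example.org
                          # or prepare an am-ready mbox with trailers:
                          b4 am 20250101120000.123-1-dev@example.org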
                      
  HTML                [1]: https://lore.kernel.org
  HTML                [2]: https://b4.docs.kernel.org/
       
                      scuff3d wrote 20 hours 42 min ago:
                      That makes sense. The first one sounds like basically any
                      PR workflow on GitHub/GitLab whatever. Though I don't
                      really care if people squash/reorder their commits. The
                      only time it's annoying is if someone else branched off
                      your branch and the commit gets rebased out from under
                      them. Though I think rebase --onto helps resolve that
                      problem.
                      
                      The second one makes sense, but I can't imagine actually
                      working that way on any of the projects I've been in. The
                      amount of work it would take just doesn't make sense. Can
                      totally understand why it would be useful on something
                      like the Linux Kernel though.
       
                    kragen wrote 1 day ago:
                    You normally have one patch per commit.  The patch is the
                    diff between that commit and its parent.  (I forget how git
                    format-patch handles the case where there are two parents.)
       
                      Cogito wrote 1 day ago:
                      > (I forget how git format-patch handles the case where
                      there are two parents.)
                      
                      As per [1], merge commits are dropped:
                      
                      Note that format-patch will omit merge commits from the
                      output, even if they are part of the requested range. A
                      simple "patch" does not include enough information for
                      the receiving end to reproduce the same merge commit.
                      
                      I originally thought it would use --first-parent (so just
                      diff vs the first parent, which is what I would want) but
                      apparently no! It is possible to get this behaviour using
                      git log as detailed in this great write-up [2].
                      
                      
  HTML                [1]: https://git-scm.com/docs/git-format-patch#_cavea...
  HTML                [2]: https://stackoverflow.com/questions/2285699/git-...
       
                        kragen wrote 1 day ago:
                        Thanks! I had no idea.
       
                      scuff3d wrote 1 day ago:
                      If that's the case I'm assuming the commit itself is
                      quite large then? Or maybe it would be more accurate to say
                      it can be large if all the changes logically go together?
                      
                      I'm thinking in terms of what I often see from people I
                      work with, where a PR is normally made up of lots of
                      small commits.
       
                        kragen wrote 1 day ago:
                        The idea is that you divide a large change into a
                        series of small commits that each make sense in
                        isolation, so that Linus or Greg Kroah-Hartman or
                        whoever is looking at your proposed change can
                        understand it as quickly as possible—hopefully in
                        order to accept it, rather than to reject it.
       
                          scuff3d wrote 1 day ago:
                          Gotcha, that makes sense. Thanks, I've always been
                          curious about how the Linux kernel works.
       
                            kragen wrote 1 day ago:
                            I may not be the best source for information, not
                            having written anything worth contributing myself.
       
                              scuff3d wrote 1 day ago:
                              Well I appreciate it none the less.
                              
                              I think the point I always get stuck on is how
                              small is "small" when we're talking about
                              commits/patches. Like if you're adding a new
                              feature (to anything, not necessarily the Linux
                              Kernel), should the entire feature be a single
                              commit or several smaller commits? I go back and
                              forth on this all the time, and if you research it
                              you're gonna see a ton of different opinions.
                              I've seen some people argue a commit should
                              basically only be a couple lines of code changed,
                              and others argue it should be the entire feature.
                              
                              You commonly hear Linus talk about
                              commits/patches having very detailed descriptions
                              attached to them. I have trouble believing people
                              would have time for that if each commit was only
                              a few lines, and larger features were spread out
                              over hundreds of commits.
       
                                kragen wrote 1 day ago:
                                When I'm reviewing commits, I find it useful to
                                see refactoring, which doesn't change behavior,
                                separated from functional changes, and for each
                                commit to leave the tree in a working, testable
                                state. This is also helpful for git bisect.
                                
                                Often, a change to a new working state is
                                necessarily bigger than a couple of lines, or
                                one of the lines has to get removed later.
                                
                                I don't want to have to say, "Hmm, I wonder if
                                this will work at the end of the file?" and
                                spend a long time figuring out that it won't,
                                then see that the problem is fixed later in the
                                patch series.
                                
                                Other people may have other preferences.
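
                                For illustration, such a series makes a
                                bisect run straightforward (v1.0 being a
                                hypothetical known-good tag):

                                    git bisect start
                                    git bisect bad HEAD
                                    git bisect good v1.0
                                    # test what git checks out, then:
                                    git bisect good   # or: bad
                                    git bisect reset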
       
                Certhas wrote 1 day ago:
                In another post discussion, someone suggested git as an
                alternative to Overleaf, a Google Docs for LaTeX... I guess
                there are plenty of people with a blind spot for the gap
                between what is technically possible and usable by experts,
                and UI that actually empowers much broader classes of users
                to wield the feature.
       
                  cozzyd wrote 1 day ago:
                  Is the joke that overleaf has decent git integration?
       
                    jdingel wrote 1 day ago:
                    Overleaf doesn't support branches, etc.
       
                      cozzyd wrote 1 day ago:
                      That's true, but at least it's possible to use via git.
                      Great for working during a flight etc
       
                  pastel8739 wrote 1 day ago:
                  If you actually use the live collaboration features of
                  overleaf, sure, it’s not a replacement. But lots of people
                  use overleaf to write latex by themselves. The experience is
                  just so much worse than developing locally and tracking
                  changes with git.
       
              seba_dos1 wrote 1 day ago:
              They are available as refs on the remote to pull though, they
              just aren't listed so don't end up mirrored either.
       
                masklinn wrote 1 day ago:
                They are listed tho. You can very much see them in info/refs.
       
                  seba_dos1 wrote 1 day ago:
                  My bad! I got misled by grandparent - they are in fact
                  mirrored with "git clone --mirror" as well.
       
          Sharlin wrote 1 day ago:
          Git was always explicitly a decentralized, "peer to peer" version
          control system, as opposed to centralized ones like SVN, with nothing
          in the protocol itself that makes a distinction between a "server"
          and a "client". Using it in a centralized fashion is just a workflow
          that you choose to use (or, realistically, one that somebody else
          chose for you). Any clone of a repository can be a remote to any
          other clone, and you can easily have a "git server" (ie. just another
          directory) in your local filesystem, which is a perfectly reasonable
          workflow in some cases.
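
            For instance, a purely local "server" is just another directory
            (paths illustrative):

                git init --bare ~/repos/project.git    # the "remote"
                cd ~/work/project
                git remote add local ~/repos/project.git
                git push local main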
       
            JamesLeonis wrote 1 day ago:
            I have a use case just for this. Sometimes my internet goes down
            while I'm working on my desktop computer. I'll put my work in a
            branch and push it to my laptop, then go to a coffee shop to
            continue my work.
       
              chipsrafferty wrote 1 day ago:
              I just copy files on a USB drive
       
              kragen wrote 1 day ago:
              When I do this I usually push to a bare repo on a USB pendrive.
       
            jonhohle wrote 1 day ago:
            This is a better summary than mine.
            
             There was a thread not too long ago where people were conflating git
            with GitHub. Git is an incredible tool (after coming from
            SVN/CVS/p4/source safe) that stands on its own apart from hosting
            providers.
       
              Sharlin wrote 1 day ago:
              And GitHub naturally has done nothing to disabuse people of the
              interpretation that git = GitHub. Meanwhile, the actual raison
              d'etre for the existence of git of course doesn't use GitHub, or
              the "pull request" based workflow that GitHub invented and is
              also not anything intrinsic to git in any way.
       
            webstrand wrote 1 day ago:
            It's a little more complex than that. Yes git can work in a
            peer-to-peer fashion, but the porcelain is definitely set up for a
            hub-and-spoke model, given how cloning a remote repo only gives you
            a partial copy of the remote history.
            
            There's other stuff too, like git submodules can't be configured to
            reference another branch on the local repository and then be cloned
            correctly, only another remote.
       
              Sophira wrote 1 day ago:
              > given how cloning a remote repo only gives you a partial copy
              of the remote history
              
               You may be thinking of the optional --depth switch, which allows
              you to create shallow clones that don't have the full history. If
              you don't include that, you'll get the full history when cloning.
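
               For example:

                   git clone --depth 1 https://example.org/repo.git
                   git -C repo fetch --unshallow   # full history later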
       
                seba_dos1 wrote 1 day ago:
                You only get it actually full with "--mirror" switch, but for
                most use-cases what you get without it is already "full
                enough".
       
              jonhohle wrote 1 day ago:
              > given how cloning a remote repo only gives you a partial copy
              of the remote history
              
              When you clone you get the full remote history and all remote
              branches (by default). That’s painfully true when you have a
              repo with large binary blobs (and the reason git-lfs and others
              exist).
       
                webstrand wrote 1 day ago:
                You're right, I got that part wrong, git actually fetches all
                of the remote commits (but not all of the refs, many things are
                missing, for instance notes).
                
                But a clone of your clone is not going to work the same way,
                since remote branches are not cloned by default, either. So
                it'll only have partial history. This is what I was thinking
                about.
       
                  kragen wrote 1 day ago:
                  Notes aren't refs.
       
                    webstrand wrote 22 hours 18 min ago:
                     Yes they are, they get put under refs/notes.
                    
                        git fetch origin refs/notes/*:refs/notes/*
                    
                     is the command you have to run to actually copy the
                     remote notes refs if you're making a working-copy clone.
       
                      kragen wrote 15 hours 15 min ago:
                      You're right.  Thank you for the correction.
       
              isaacremuant wrote 1 day ago:
               I'd say git submodules have such an awkward UX that they should
              probably not be used except in very rare and organized cases.
              I've done it before but it has to be worth it.
              
              But I get your larger point.
       
                seba_dos1 wrote 1 day ago:
                And they're often (not always) used where subtrees would fit
                better.
       
                  webstrand wrote 1 day ago:
                  I can't get over my fear of subtrees after accidentally
                  nuking one of my repos by doing a rebase across the subtree
                  commit. I've found that using worktrees, with a script in the
                  main branch to set up the worktrees, works pretty well to
                  split history across multiple branches, like what you might
                  want in a monorepo.
                  
                  Sadly doing a monorepo this way with pnpm doesn't work, since
                  pnpm doesn't enforce package version requirements inside of a
                  pnpm workspace. And it doesn't record installed version
                  information for linked packages either.
       
        globular-toast wrote 1 day ago:
         The proper way to do this is to make a "bare" clone on the server (a clone
        without a checked out branch). I was doing this in 2010 before I even
        signed up to GitHub.
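
         Something like this (paths illustrative):

             # a bare repo has no working tree for pushes to conflict with
             git clone --bare /path/to/repo /srv/git/repo.git
             # or start one from scratch:
             git init --bare /srv/git/repo.git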
       
        blueflow wrote 1 day ago:
        Docs: [1]
        
  HTML  [1]: https://git-scm.com/docs/git-clone#_git_urls
  HTML  [2]: https://git-scm.com/docs/git-init
       
        eqvinox wrote 1 day ago:
           git clone ssh://username@hostname/path/to/repo

         this is equivalent to:

           git clone username@hostname:/path/to/repo

         and if your usernames match between local and remote:

           git clone hostname:/path/to/repo

         (with the scp-like syntax, a path without a leading / is relative
         to your home directory on the remote)
       
          mlrtime wrote 1 day ago:
           I never thought about this... I've had the following problem in the
          past.
          
          Host A, cannot reach official github.com.  But Host B can and has a
           local copy of a repo cloned.  So Host A can 'git clone ssh://'
           from Host B, which is essentially equivalent, just setting
           origin to Host B instead of github.com, sort of acting as a
           manual proxy?

           What if Host A is natted, so Host A can ssh to Host B but not
           the reverse; can Host A still push its changes to Host B?
          
          In the rare times I've needed this, I just 'rsync -av --delete' a
          repo from B->A.
       
            eqvinox wrote 22 hours 58 min ago:
            You might want to read the documentation on "git remote". You can
            pull/fetch and push between repos quite arbitrarily, with plain
            ssh.
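
             For example (hostnames and paths hypothetical):

                 # on Host A, which can reach Host B over ssh:
                 git remote add hostb ssh://user@hostb/srv/git/repo.git
                 git fetch hostb       # bring B's commits over
                 git push hostb main   # or send A's work to B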
       
          DrNefario wrote 1 day ago:
           Git also respects your `~/.ssh/config`, which lets you change
          settings per host. I've set up each of my GitHub accounts with a
          different "host" so it's easy to switch between them.
       
        XorNot wrote 1 day ago:
        My git "server" is a folder of bare git repositories in home directory
        which I share with Syncthing.
        
        It'd be great if there was more specific support. But in practice? No
        problems so far.
       
          tasuki wrote 1 day ago:
          Perhaps the only issue with this setup is that you lose some of the
           robustness of git: mess up one repo beyond repair, and you've just
          messed up all the checkouts everywhere.
          
          I sync my repos manually (using GitHub as the always-on remote, but
          I'm not particularly attached to it). This gives me more resilience
          should I blow up the repo completely (hard to do, I know).
       
            XorNot wrote 1 day ago:
            They're not checked out repos. They're bare repos, which I then
            checkout from.
            
            The benefit is that git repos are essentially append only in this
            mode.
            
            The folder itself is scheduled into an encrypted backblaze backup
            too.
       
              tasuki wrote 1 day ago:
              Ok, fair!
       
        timmg wrote 1 day ago:
        There was a brief period when Google Cloud had support for hosting git
         on a pay-per-use basis.  (I think it was called Google Cloud
         Source Repositories.)  It had a clunky but usable UI.
        
        I really preferred the idea of just paying for what I used -- rather
        than being on a "freemium" model with GitHub.
        
        But -- as many things with Google -- it was shutdown.  Probably because
        most other people do prefer the freemium model.
        
        I wonder if this kind of thing will come back in style someday, or if
        we are stuck with freemium/pro "tiers" for everything.
       
        johnisgood wrote 1 day ago:
         I am surprised how little software engineers (even those that use
         it daily) know about git.
       
          general1465 wrote 1 day ago:
           That's because git is hard to use and full of commands which make
           no sense. Eventually people will learn to just clone, pull,
          push and occasionally merge and be done with it.
       
            jonhohle wrote 1 day ago:
            People said the same thing about “advanced” operations with cvs
            and svn (and just called their admin for p4 or source safe). But I
            really don’t understand the sentiment.
            
            Managing code is one of the cornerstones of software engineering.
            It would be like refusing to learn how to use a screwdriver because
             someone really just wants to hammer things together.
            
            The great thing about $your-favorite-scm is that it transcends
             language or framework choices and applies to nearly any
            project, even outside of software. I’m surprised it isn’t part
            of more professional tools.
       
              general1465 wrote 1 day ago:
              > Managing code is one of the cornerstones of software
              engineering. It would be like refusing to learn how to use a
              screwdriver because someone really just wants to hammer things
              together.
              
               Ok, then make the commands make sense. For example 90%+ of
               people have no idea what rebase does, yet it is a useful
               command.

               People don't want to explore git beyond what works for them,
               because they can't experiment. The moment they "hold it
               wrong" the whole repo will break into pieces, unable to go
               forward or backwards. Hopefully they did not commit.
              
              Git feels like a hammer covered in razor blades. The moment you
              will try to hold it differently you will cut yourself and
               somebody else will need to stitch you up.
       
                johnisgood wrote 1 day ago:
                You can definitely experiment. Make a copy of the directory and
                experiment on that copy without pushing. I have done it a
                million times.
       
                  cls59 wrote 16 hours 8 min ago:
                  Also, "git reflog" lists out all commit SHAs in chronological
                  order. Trying to figure out how to rebase, but got lost and
                  everything seems broken? You're just one "git reset" away
                  from the better place you were in and "reflog" has the list.
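
                   For instance:

                       git reflog                  # where HEAD has been
                       git reset --hard HEAD@{2}   # back two moves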
       
            skydhash wrote 1 day ago:
            Most people would just use `app` `app (final)`, `app (final 2)`,
            etc... VCS exists for a reason and I strongly believe that the
             intersection of people that say git is hard and people that do not
            know why you want a VCS is nearly a perfect circle.
       
              xigoi wrote 1 day ago:
              Being a useful tool does not justify having an extremely
              inconsistent and unnecessarily confusing UI.
       
        max_ wrote 2 days ago:
        I tried this and it is never as smooth as described.
        
         Why is GitHub popular? It's not because people are "dumb" as others
        think.
        
         It's because GitHub "Just Works".
        
         You don't need obscure tribal knowledge like seba_dos1 suggests [2]
         or this comment [1]. The official Git documentation, for example,
         has its own server setup guide [3] that I failed to get working (it
         is vastly different from what OP is suggesting).
        
        The problem with software development is that not knowing such "tribal
        knowledge" is considered incompetence.
        
        People don't need to deal with obscure error messages which is why they
        choose GitHub & why Github won.
        
         Like the adage goes, "Technology is best when it is invisible"
        
        
  HTML  [1]: https://news.ycombinator.com/item?id=45711294
  HTML  [2]: https://news.ycombinator.com/item?id=45711236
  HTML  [3]: https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-th...
       
          jrm4 wrote 1 day ago:
          It is "incompetence," or at the very least, it is unwise.
          
          At the very least, a huge part of the intent of Git's very design was
          decentralization; though as is the case with many good tools, people
          don't use them as they are designed.
          
          Going further, simply because "deeply centralized Git" is very
          popular, does not AT ALL determine that "this is the better way to do
          things." Please don't frame it as if "popular equals ideal."
       
          throwaway106382 wrote 1 day ago:
          God forbid we learn how our tools work.
       
          herpdyderp wrote 1 day ago:
          > Its because GitHub "Just Works".
          
          Unfortunately (though expected), ever since Microsoft took over this
          has devolved into GitHub "sometimes works".
       
          imiric wrote 1 day ago:
          > Why is GitHub popular? its not because people are "dumb" as others
          think.
          
          > Its because GitHub "Just Works".
          
          Git also "just works". GitHub simply offers a higher level of
          abstraction, a graphical UI, and some value-add features on top of
           Git. Whether all this really is better is arguable. I would say that
          it's disastrous that most developers rely on a centralized service to
          use a distributed version control system. Nevermind the fact that the
          service is the single largest repository of open source software,
          owned and controlled by a giant corporation which has historically
          been hostile to OSS.
          
          GitHub "won" because it came around at the right time, had "Git" in
          its name—which has been a perpetual cause of confusion w.r.t. its
           relation with Git—and was boosted by the success of Git itself,
          largely due to the cult of personality around Linus Torvalds. Not
          because Git was technically superior, or because GitHub "just works".
          
          > You don't need obscure tribal knowledge
          
          As others have said, a bare repository is certainly not "tribal
          knowledge". Not anymore than knowing how to use basic Git features.
          
          > Like the adge goes, "Technology is best when it is invisible"
          
          Eh, all technology is an abstraction layer built on top of other
          technology. Whether it's "invisible" or not depends on the user, and
          their comfort level. I would argue that all abstractions also make
          users "dumber" when it comes to using the layers they abstract. Which
          is why people who only rely on GitHub lose the ability, or never
          learn, to use Git properly.
       
          bitbasher wrote 1 day ago:
          I understand where you're coming from, but this seems like a terrible
          defeatist attitude to have.
          
          What are we supposed to do ... throw our hands up because GitHub won?
          
          I'll be down voted, but I'll say it. If you hold that attitude and
          you don't learn the fundamentals, if you don't understand your tools,
          you're a bad developer and a poor craftsman. You're not someone I
          would hire or work with.
       
            Tomis02 wrote 1 day ago:
            git is not "the fundamentals". It's a tool that's very difficult to
            learn but we are forced to use because "it won" at some point.
            
            Git's difficulty is NOT intrinsic; it could be a much better tool
            if Torvalds were better at UX. In short, I don't blame people who
             don't want to "learn git". They shouldn't have to learn it any
             more than one learns to use a browser or Google docs.
       
              bitbasher wrote 1 day ago:
              Git isn't fundamental, but version control is. If you're doing
              development without it, you're making a mistake.
              
              You're likely using a VCS, which is likely git (or jj, hg,
              fossil, tfs, etc).
              
              Therefore, you should know how to use whatever you're using. It
              shouldn't be "invisible" to you.
       
          motorest wrote 1 day ago:
          > I tried this and it is never as smooth as described.
          
           I think your comment shows some confusion that is either the
           result or the cause of some negative experiences.
          
          Starting with GitHub. The primary reason it "just works" is because
          GitHub, like any SaaS offering, is taking care of basic things like
          managing servers, authorization, access control, etc.
          
           Obviously, if you have to set up your own ssh server, things won't be
          as streamlined as clicking a button.
          
          But that's obviously not the point of this post.
          
           The point is that the work you need to do to set up a Git server
           is way less than you might expect, because you already have most
           of the pieces in place, and the ones that aren't are actually
           low-hanging fruit.
          
          This should not come as a surprise. Git was designed as a distributed
           version control system. Being able to easily set up a stand-alone
          repository was a design goal. This blog post covers providing access
          through ssh, but you can also create repositories in any mount point
          of your file system, including in USB pens.
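
           For example, with a pen drive mounted at a hypothetical
           /mnt/usb:

               git init --bare /mnt/usb/project.git
               git remote add usb /mnt/usb/project.git
               git push usb main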
          
          And, yes, "it just works".
          
           > The official Git documentation, for example, has its own server
           setup guide that I failed to get working (it is vastly different
           from what OP is suggesting)
          
          I'm sorry, the inability to go through the how-to guide that you
          cited has nothing to do with Git. The guide only does three things:
           create a user account, set up ssh access to that account, and
           create a Git repository. If you fail to create a user account
           and set up ssh, your problems are not related to Git. If you
           created a user account and successfully set up ssh access, all
           that is missing is checking
          out the repo/adding a remote repo. If you struggle with this step,
          your issues are not related to Git.
       
          Dylan16807 wrote 1 day ago:
          People use github because it has a bunch of features outside git and
          because they don't already have a server.
          
          Not because it's hard or obscure to put git on your server.
       
          vaylian wrote 1 day ago:
          > Its because GitHub "Just Works".
          
          IPv6 still doesn't work with GitHub:
          
  HTML    [1]: https://doesgithubhaveipv6yet.com/
       
          croes wrote 1 day ago:
          > Technology is best when it is invisible
          
           For normal users. Having this tribal knowledge is basically what
           makes a developer, and it's their job to make technology
           invisible for others. Someone has to be behind the curtain.
       
          blueflow wrote 1 day ago:
          > obscure tribal knowledge
          
          The knowledge is neither obscure nor tribal, it is public and
          accessible. And likely already on your system, in the form of
          man-pages shipped with your git binaries.
          
          > The problem with software development is that not knowing such
          "tribal knowledge" is considered incompetence.
          
          Consequently.
       
          myaccountonhn wrote 1 day ago:
          The cool thing is that once you know how simple it is to self host (I
          certainly didn't know before, just used github), you learn a skill
          that you can apply to many different contexts, and understand better
          what actually goes on in systems you depend on. That's what these
          "tech should be invisible" people miss, where they tout that you
          should instead learn SASS solutions where you have zero ownership nor
          agency, instead of taking the time to learn transferable skills.
       
          seba_dos1 wrote 2 days ago:
          Are basic git concepts like bare repos "obscure tribal knowledge"
          these days? What do you think ".git" directory is?
       
            general1465 wrote 1 day ago:
            Now try to add a submodule X (which is a bare repository) to your
            repository Y
            
             Good job, now you can't add it or remove it without manually
             removing it from the .git folder.
       
            have_faith wrote 1 day ago:
            The vast majority of developers working with git daily don’t know
            what a bare repo is, or that it exists at all. It’s not obscure
            knowledge as such, it’s just never come up for them as something
            they need.
       
              seba_dos1 wrote 1 day ago:
              The vast majority [0] of developers working with git daily have
              no mental model of git repos and just do a mental equivalent of
              copy'n'pasting commands and it's enough to let them do their work
              (until something breaks at least), so it doesn't seem like a
              particularly good indicator of whether something is obscure or
              not. There are many obscure things hiding in git, bare repos
              aren't one of those :)
              
              [0] Source: pulled out of my arse.
       
            max_ wrote 2 days ago:
            Having to make a repo bare to not have issues with branches is
            definitely obscure.
       
              seba_dos1 wrote 1 day ago:
              It's obvious as soon as you consider that your push will
              overwrite a ref that's currently checked out in the target's repo
              workdir. The exact same thing happens when pushing to local
              repos. You don't have to make a repo bare to avoid this issue,
              but it's certainly the easiest way to avoid it altogether when
              you don't need a workdir on the server side.
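
               (And if you do want a workdir on the server side, git can be
               told to update it on a clean push; a minimal example:

                   git config receive.denyCurrentBranch updateInstead

               run inside the server-side non-bare repo.)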
       
                Dylan16807 wrote 1 day ago:
                It's obvious that it needs to update the ref.  It's not obvious
                that this would cause any problems.  You could fix HEAD as part
                of writing the ref.  Automatically managing HEAD is normal git
                behavior.
       
                  seba_dos1 wrote 1 day ago:
                  It's obvious that something non-obvious would have to happen
                  with the workdir behind the user's back. Imagine that you are
                  working with your workdir while someone else pushes something
                  to your repo. Bailing out is the only sane option (unless
                  something else has been explicitly requested by the user).
       
                    Dylan16807 wrote 1 day ago:
                    Nothing has to happen to the workdir if you fix HEAD.
       
                      seba_dos1 wrote 1 day ago:
                      ...except for all the things that rely on HEAD
                      pointing to another ref now changing their behavior.
                      gbp will by default bail out if HEAD is not on a
                      branch, "git commit" won't update the ref you thought
                      it would because you're now suddenly on a "detached
                      HEAD", etc.
       
                        Dylan16807 wrote 1 day ago:
                        I've never heard of... debian package builder? I
                        don't care if it gets annoyed; if that's one of the
                        biggest issues then that's a good sign for the
                        method.
                        
                        Yes the commit won't be on the branch you want, but
                        you'd get about the same issue if the two repos had
                        a bare upstream. The branch diverges and you need
                        to merge. It's a bit less ergonomic here but could
                        be improved. Git could use branch-following
                        improvements in general.
       
                          seba_dos1 wrote 1 day ago:
                          > Yes the commit won't be on the branch you want, but
                          you'd get about the same issue if the two repos had a
                          bare upstream.
                          
                          Not at all. The commit would have "landed" on the
                           exact branch you thought it would. How it will be
                          reconciled with a diverged remote branch is
                          completely orthogonal and may not even be of concern
                          in some use cases at all.
       
                            Dylan16807 wrote 1 day ago:
                            The situation is almost identical except you don't
                            have a cute name for your new commit.  Let's say
                            you add a ref to your detached commit, perhaps
                            local/foo.  Then you're looking at a divergence
                            between foo and local/foo.  If you had a bare
                            upstream it would be a divergence between
                            origin/foo and foo.  No real difference.  And if
                            you don't want to reconcile then you don't have to.
                            
                            If git was a tiny bit smarter it could remember you
                            were working on "foo" even after the ref changes.
       
                              seba_dos1 wrote 1 day ago:
                              Of course it could, but that doesn't yet mean it
                              should. A checked-out ref is considered to be
                              in-use and not to be manipulated (unless done in
                              tandem with HEAD), not just by "git push" but
                              also other tools like "git branch". It's
                              consistent and, IMO, less surprising than what
                              you propose. It could be an optional behavior
                              configured by receive.denyCurrentBranch, though I
                              don't see a good use-case for it that isn't
                              already handled by updateInstead.
       
                                Dylan16807 wrote 1 day ago:
                                If someone pushes to a repo you should expect
                                the refs to change.  In some sense doing
                                nothing avoids surprise but it's a bad way to
                                avoid surprise.
                                
                                But my real point is that refusing to act is
                                not "the only sane [default] option" here. 
                                Mild ergonomic issues aren't a disqualifier.
       
                                  seba_dos1 wrote 1 day ago:
                                  If you use "git branch -d" you should expect
                                  the ref to be deleted, and yet:
                                  
                                  > error: cannot delete branch 'main' used by
                                  worktree at '/tmp/git'
                                  
                                  You could build the system differently and
                                  what seems like a sane default would be
                                  different there, but it would be a different
                                  system. In this system, HEAD isn't being
                                  manipulated by things not meant to manipulate
                                  it.
       
              jdboyd wrote 1 day ago:
              It wasn't obscure before GitHub.
              
                Still, I like the online browser and PR workflow.
       
                zenmac wrote 1 day ago:
                Yeah, this is the best argument for a GitHub type of web
                git GUI. Not knowing bare repos seems just like devs not
                reading docs. And I'm sorry, in this day and age devs need
                to keep up, not just type git commands the way they'd run a
                curl http://...../install.sh type of thing.

                However, I would NEVER trust GitHub since the MS
                acquisition. Codeberg and [1] are perfectly sound FOSS
                alternatives to GitHub and GitLab nowadays.
                
  HTML          [1]: https://forgejo.org
       
        bitbasher wrote 2 days ago:
        I have been doing this for many years.
        
        If you want a public facing "read only" ui to public repositories you
        can use cgit ( [1] ) to expose them. That will enable others to git
        clone without using ssh.
        
        I keep my private repositories private and expose a few public ones
        using cgit.
        
  HTML  [1]: https://git.zx2c4.com/cgit/about/
       
          zyxin wrote 1 day ago:
          I sent over a cgit web page for a take home assessment and the
          interviewer was horribly confused. I assume they have never seen
          anything apart from github before...
       
            bitbasher wrote 22 hours 56 min ago:
            I imagine the interviewer was one of those senior developers that
            still thinks git == github.
       
          LambdaComplex wrote 1 day ago:
          Huh, I never realized that cgit was created by the same person that
          created wireguard.
       
        ezst wrote 2 days ago:
        As a git user "not by choice" (my preference going for mercurial every
        single day), I never understood why git needs this distinction between
        bare/non-bare (or commit vs staging for that matter). Seems like yet
        another leaky abstraction/bad design choice.
       
          mamcx wrote 1 day ago:
           Jujutsu makes the interaction sane, and is far more logical than
           even mercurial (which I love too!)
       
          rester324 wrote 1 day ago:
          Yeah, I am in the same boat. My files are either non-staged or
          committed about 99.99% of the time. For me the concept of staging is
          completely useless
       
            ezst wrote 1 day ago:
             To be clear, I wasn't trying to say that there is no need for
             staging-based workflows. I was just saying that there is nothing
             in terms of convenience or capabilities that the staging area
             offers that can't be achieved with just regular commits and
             amending/rewriting.
            
             The immediate response from many git users when confronted with
             alternative VCSes is "well, it doesn't have a staging area, so
             it's obviously inferior" instead of "let's see how differently
             they approach this problem, and perhaps I will like it / perhaps
             git isn't all perfect after all".
       
            mbork_pl wrote 1 day ago:
            Staging is immensely useful in more than one case.
            
            Case one: WiP on a branch, code is pretty stable, but I want to do
            some experiments which will likely be deleted. I stage everything
            and then make (unstaged) changes which I can then undo with two
            keystrokes.
            
            Case two: I'm reviewing a complex PR, so I first merge it with
            --no-commit, then unstage everything and then stage chunks (or even
            individual lines) I have already reviewed.
            
            Case three: I was coding in the state of flow and now I have a lot
            of changes I want to commit, but I want to separate them into
            several atomic commits. I stage the first batch of changes, commit,
            then stage another, commit, etc.
            
            There are probably more, but these three alone are worth having the
            staging area.
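             
             Case three, concretely (a sketch; the commit messages are
             placeholders):
             
                 git add -p                   # pick hunks interactively
                 git commit -m "extract helper"
                 git add -p
                 git commit -m "use helper in parser"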
       
              ezst wrote 1 day ago:
              Case one: you don't need staging. You can stash, or just commit
              and checkout HEAD^
              
              Case two: you don't need staging. You can stash, and then unstash
              incrementally (I would be shocked if git doesn't have the
              equivalent of `hg unshelve --interactive` and its keyboard
              friendly TUI)
              
              Case three: you don't need staging. You can just edit/amend the
              series (rebase --interactive I believe you guys call that).
              
               That is to say, all that you want to put in your stash, you
               could commit directly to the DAG and edit at your convenience
               with the regular history-rewriting tools already at your
               disposal. And the off-DAG stuff can be handled by stash (and
               even there, a normal commit that you would rebase to its
               destination would do perfectly).
               
               Incidentally, what I described is what [1] does, which can be
               pretty well described as "taking the best of all major VCSes".
              
  HTML        [1]: https://github.com/jj-vcs/jj
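               
               For illustration, case one without a staging area is just (a
               sketch using plain git commands):
               
                   git commit -am "checkpoint before experiments"
                   # ...hack away...
                   git restore .            # drop the experiment
                   git reset --soft HEAD^   # optionally dissolve the checkpoint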
       
          skydhash wrote 1 day ago:
           A repo is a database containing a tree of commits. Then you have
           the concept of a branch, which points to a specific commit. Then
           there's the special pointer HEAD, which marks the commit the
           current worktree is based on.
           
           When you checkout (now switch) a branch, HEAD becomes the same as
           the branch (they point to the same commit). When you do operations
           like commit, reset, rebase, ... both HEAD and the branch are
           updated. So if a remote node pushed to the local node, that push
           would update the branch and mess up the local worktree. So you
           create a bare repo instead, which doesn't have a local worktree.
           
           Note: a worktree is computed by starting from the commit currently
           identified by HEAD and following the parent information on each
           commit back to the initial commit.
           
           The staging area is like a stored snapshot of what you would like
           to commit. You could always create a patch between HEAD and the
           worktree, edit it, save the edited patch as the commit, and then
           apply the leftover to the worktree. The staging area just makes
           that easier: it's a WIP patch for the next commit.
       
            ezst wrote 1 day ago:
            So, exactly as DrinkyBird said: I still don't see a good reason for
            this distinction to exist, what's in the repo's history and in its
            working directory are orthogonal concepts that are tied together by
            a bad UX for no apparent gain.
       
            DrinkyBird wrote 1 day ago:
            Mercurial has no distinction between a bare repo and a non-bare
            repo: any repo can have a working copy or not. You can check out a
            working copy with `hg update somerevision`, or get rid of it with
            `hg update null`.
            
            You can push to a repo with a working copy, if you like; nothing
            will happen to that working copy unless you run `hg update`. Since
            you don’t need a working copy on the server’s repo, you never
            run `hg update` on it, and it’s effectively what git calls a bare
            repository.
       
            seba_dos1 wrote 1 day ago:
            > When you checkout (now switch) a branch, HEAD is now the same as
            the branch (they point to the same commit).
            
            Actually, HEAD now points to that branch, which in turn points to
            the commit. It's a different state than when HEAD points directly
            to the same commit (called "detached HEAD" in Git's terminology)
            where this issue doesn't happen as you're not on a branch at all.
            It's a subtle, but important distinction when trying to understand
            what happens under the hood there.
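             
             You can see the distinction directly (a quick sketch; run it in
             any repo):
             
                 git symbolic-ref HEAD    # on a branch: prints refs/heads/main
                 git checkout --detach    # point HEAD at the commit itself
                 git symbolic-ref HEAD    # now fails: HEAD is not a symbolic ref
                 git rev-parse HEAD       # resolves to a commit either way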
       
              skydhash wrote 1 day ago:
              You're right. I was trying to simplify.
       
        1oooqooq wrote 2 days ago:
         this article is very bad advice. this way things are extremely
         brittle and there's a reason all those settings are disabled by
         default. you will lose data, save for very specific use cases.
         
         the vastly superior way is a bare repo ("git init --bare"), which
         is first-class supported without hacky settings.
       
        ninkendo wrote 2 days ago:
        I remember the first time I tried git, circa 2006, and the first 3
        commands I tried were:
        
            git init
            git commit -am Initial\ commit
            git clone . ssh://server/path/to/repo
        
        And it didn’t work. You have to ssh to the remote server and “git
        init” on a path first. How uncivilized.
        
        Bitkeeper and a few other contemporaries would let you just push to a
        remote path that doesn’t exist yet and it’d create it. Maybe git
        added this since then, but at the time it seemed like a huge omission
        to me.
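         
         For the record, the dance that does work (a sketch; host and path
         are placeholders):
         
             ssh server 'git init --bare /path/to/repo'
             git remote add origin ssh://server/path/to/repo
             git push -u origin master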
       
          HumanOstrich wrote 1 day ago:
          So at first you didn't understand the commands or how git works. You
          needed a couple more steps to get things working. But apparently you
          just gave up and you're still complaining about it 20 years later.
       
            ninkendo wrote 1 day ago:
            Lol I have no idea why you came away with “… and I never used
            git again!” from all that.
            
            I love git. It’s unironically one of my favorite pieces of
            software in the world. I’ve used it nearly every day for almost
            20 years.
            
            Maybe take a break from HN for a bit, I mean that sincerely.
       
          seba_dos1 wrote 1 day ago:
           Hah, you made me check whether clone's second argument can be a
           URL. But no, it's explicitly a local <directory>, so it can't be
           used as an equivalent of "git push --mirror" :)
       
        seba_dos1 wrote 2 days ago:
        Just make the repository on the server side bare and you won't have to
        worry about checked out branches or renaming ".git" directory.
       
          cl3misch wrote 1 day ago:
          > This is a great way to [...] work on server-side files without
          laggy typing or manual copying
          
           This is the use case mentioned in the article, and it wouldn't
           work with a bare repo. But if the server you're SSHing to is just
           a central point to sync code across machines, then you're right:
           multiple hoops mentioned in the article are avoided by keeping the
           central repo bare.
       
            kragen wrote 1 day ago:
            See [1] for a worked example of how it works with a bare repo.
            
  HTML      [1]: https://news.ycombinator.com/item?id=45713074
       
            liveoneggs wrote 1 day ago:
             yeah, it seems odd that they don't just have a bare remote at
             $HOME/repos/foo.git and then clone from there locally and
             remotely
       
              seba_dos1 wrote 1 day ago:
              FWIW a single user working on a remote versioned directory is
              indeed a reasonable use-case for
              receive.denyCurrentBranch=updateInstead, but IMO the article
              should have made it clear that it's not necessarily a good choice
              in general.
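               
               For reference, that setting is enabled on the server-side repo
               itself:
               
                   git config receive.denyCurrentBranch updateInstead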
       
        imiric wrote 2 days ago:
        I've used Git over SSH for several years for personal projects. It just
        works with no additional overhead or maintenance.
        
        Tip: create a `git` user on the server and set its shell to
        `git-shell`. E.g.:
        
          sudo useradd -m -g git -d /home/git -s /usr/bin/git-shell git
        
        You might also want to restrict its directory and command access in the
        sshd config for extra security.
        
        Then, when you need to create a new repository you run:
        
          sudo -u git git init --bare --initial-branch=main
        /home/git/myrepo.git
        
        And use it like so:
        
          git clone git@myserver:myrepo.git
        
        Or:
        
          git remote add myserver git@myserver:myrepo.git
          git push -u myserver main
        
        This has the exact same UX as any code forge.
        
        I think that initializing a bare repository avoids the workarounds for
        pushing to a currently checked out branch.
       
          amelius wrote 1 day ago:
          Yes, that's how I use it too.
          
          However, this setup doesn't work with git-lfs (large file support).
          Or, at least I haven't been able to get it working.
          
          PS: Even though git-shell is very restricted you can still put shell
          commands in ~/git-shell-commands
       
            matrss wrote 1 day ago:
            To my knowledge git-lfs is only really designed to store your large
            files on a central server. I think it also uses its own out-of-band
            (from the perspective of git) protocol and connection to talk to
            that server. So it doesn't work with a standard ssh remote, and
            breaks git's distributed nature.
            
            For an actually distributed large file tracking system on top of
            git you could take a look at git-annex. It works with standard ssh
            remotes as long as git-annex is installed on the remote too (it
            provides its own git-annex-shell instead of git-shell), and has a
            bunch of additional awesome features.
       
          general1465 wrote 1 day ago:
           Yes, exactly. Creating a git user on a Linux machine and
           configuring it just for git turned out to be the easiest way to
           get SourceTree and Git on Windows working with it out of the box.
       
        singpolyma3 wrote 2 days ago:
         One note: Xcode, and maybe some other clients, can't use HTTP "dumb
         mode". Smart mode is not hard to set up, but it's a few lines of
         server config more than this hook.
         
         TIL about the update options for a checked-out branch. In practice,
         though, you usually want just a bare ".git" folder on the server.
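         
         For the curious, smart mode over nginx is roughly this much config
         (a sketch using git-http-backend; the paths and socket are
         assumptions):
         
             location ~ /git(/.*) {
                 include       fastcgi_params;
                 fastcgi_pass  unix:/run/fcgiwrap.socket;
                 fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
                 fastcgi_param GIT_PROJECT_ROOT    /srv/git;
                 fastcgi_param GIT_HTTP_EXPORT_ALL "";
                 fastcgi_param PATH_INFO           $1;
             }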
       
          kragen wrote 1 day ago:
          This is the first time I've heard of a Git client that's too broken
          to use HTTP dumb mode.
       
          saagarjha wrote 2 days ago:
          I feel like this is a bug that Xcode should fix
       
            eptcyka wrote 1 day ago:
            There are many bugs Xcode should fix.
       
        nicce wrote 2 days ago:
         Interesting. I am just trying to decide between self-hosting Forgejo
         and other options for hosting Git on my own private network.
       
          bluehatbrit wrote 1 day ago:
           I really like Forgejo, but the git hosting bit is really just one
           feature for me. Their CI/CD runners, package repos and such are
           all things I need, and Forgejo includes them in one nice bundle.
           
           If I was just using it for git hosting I'd probably go for
           something more lightweight, to be honest.
       
          PaulKeeble wrote 1 day ago:
           A lot of people do host Forgejo, but unless you are actually
           working with other people who need mediated access via pull
           requests, it doesn't do much other than provide a pretty GUI to
           look at. The bare SSH approach makes more sense for personal
           projects.
       
          orblivion wrote 1 day ago:
          One of my favorite tools available for Linux is called gitolite. It's
          in the Debian repo. [1] If you think the bare bones example is
          interesting and want something simple just for you or a small group
          of people, this is one step up. There's no web interface. The admin
          is a git repository that stores ssh public keys and a config file
          that defines repo names with an ACL. When you push, it updates the
          authorization and inits new repositories that you name.
          
          I put everything in repos at home and a have multiple systems
          (because of VMs) so this cleaned things up for me considerably.
          
  HTML    [1]: https://gitolite.com/gitolite/
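           
           The config file is about this simple (a sketch of gitolite's ACL
           syntax; the names are made up):
           
               repo experiments
                   RW+     =   alice
                   R       =   bob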
       
            deltarholamda wrote 21 hours 14 min ago:
            gitolite is great. It's simple and extremely lightweight.
       
          01HNNWZ0MV43FF wrote 2 days ago:
          Forgejo is working okay for me.
          
          Hard to justify using SSH for Git. Principle of least power and all
          that
       
            skydhash wrote 1 day ago:
             Git is distributed, so there's no "least power" issue here.
             Everyone has full power over their own copy. What you want is
             one copy being the source of truth. You can always use HTTP for
             read-only access to that copy, with a limited set of people
             having write access to update it. Patches can be shared via
             anything (git has built-in support for email).
       
          icy wrote 2 days ago:
          Could also consider running a Tangled knot (lightweight, headless git
          servers):
          
  HTML    [1]: https://tangled.org
       
            nicce wrote 2 days ago:
            That looks definitely interesting!
       
            rustman123 wrote 2 days ago:
            Does this support private repositories with collaboration?
       
              icy wrote 1 day ago:
              Not yet, unfortunately. Mostly due to protocol limitations—we
              use AT ( [1] ) for federation.
              
  HTML        [1]: https://atproto.com
       
                xeonmc wrote 1 day ago:
                can tangled support a forum-styled or subreddit-like thread
                discussion interface, on a per-repo basis so that "anyone could
                start a subreddit" via creating a discussion-only repo?
       
                  icy wrote 1 day ago:
                  We’ve considered this a lot. Our issues implementation is
                  threaded—perhaps more Stack Overflow-like. We’re thinking
                  of renaming it to Discussions, and having the actual issue
                  tracker be collaborators-only.
       
                  linguaz wrote 1 day ago:
                  Wondering if something like this could be implemented on
                  tangled with public-inbox: [1] > public-inbox implements the
                  sharing of an email inbox via git to complement or replace
                  traditional mailing lists. Readers may read via NNTP, IMAP,
                  POP3, Atom feeds or HTML archives.
                  
                  > public-inbox stores mail in git repositories as documented
                  in [2] and [3] > By storing (and optionally) exposing an
                  inbox via git, it is fast and efficient to host and mirror
                  public-inboxes.
                  
  HTML            [1]: https://public-inbox.org/README.html
  HTML            [2]: https://public-inbox.org/public-inbox-v2-format.txt
  HTML            [3]: https://public-inbox.org/public-inbox-v1-format.txt
       
          omani wrote 2 days ago:
             am using gitea. but thinking of switching to Soft Serve (charm).
       
            sesm wrote 2 days ago:
            I'm also using gitea, running on RPI5. Setup took like 15 mins,
            highly recommend.
       
              omani wrote 1 day ago:
              mine is running on an rpi zero w (v1). super low power
              consumption.
       
        thyristan wrote 2 days ago:
        Maybe I'm too old, but are there people that really didn't know that
        any ssh access is sufficient for using git?
       
          delta2uk wrote 1 day ago:
          I think if you're too young to know the earlier alternatives it's
          easy to overlook the distributed nature of git which made it
          different from them.
       
          al_borland wrote 1 day ago:
          I read an article not long ago where students coming out of a web dev
          bootcamp were unable to make a hello world html file and open it in
          their browser.
          
           We’ve gone so far with elaborate environments and setups to make
           it easy to learn more advanced things that many people never learn
           the very basics. I see this as a real problem.
       
            UK-Al05 wrote 22 hours 24 min ago:
             With CORS this can be awkward now.
       
              al_borland wrote 21 hours 0 min ago:
               For a simple hello world HTML file, CORS should not enter the
               equation.
       
          tom_ wrote 1 day ago:
          Yep, me. I noticed that you sometimes use ssh:// URLs for GitHub, but
          I figured it was for authentication purposes only, and that once that
          step was over, some other thing came into play.
       
            djoanwnn wrote 17 hours 25 min ago:
             It usually is. Normally you don't put a full shell behind ssh if
             you only expose git, but of course you can.
       
          paradox460 wrote 1 day ago:
           In the past I've blown coworkers' minds during GitHub outages by
           just pulling code from a coworker's machine and continuing to
           work.
           
           With remote work, if your company stubbornly refuses to use a
           modern VPN like Tailscale and you can't easily network two
           computers together, git format-patch and git am, coupled with
           something like Slack messages, work well enough, albeit
           moderately cumbersome.
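           
           The patch flow is short enough to memorize (a sketch; the branch
           name is an assumption):
           
               git format-patch origin/main   # one .patch file per commit
               # send the files over Slack, email, whatever
               git am *.patch                 # apply them on the other side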
       
          nmz wrote 1 day ago:
           I don't think it has anything to do with being old; this is what
           happens when software gets so complex that nobody knows even the
           very basics. The documentation highlights this: if the software's
           documentation becomes lengthy enough to fill an entire book,
           you're going to get these sorts of articles from time to time.
           Another example is bash: a huge man page for what should just be
           a shell.
       
          thaumasiotes wrote 1 day ago:
          I was aware that it should be possible to interact directly with
          another repository on another machine (or, heck, in another directory
          on the same machine), which implies that ssh access is sufficient,
          but I was unaware of how that would be done.
       
          dogleash wrote 1 day ago:
           There's definitely generational loss about the capabilities of
           git. In the beginning, git was a new version control option for
           people to explore; now git is the thing you learn to collaborate
           on GitHub/GitLab/etc. I remember the Chacon book coming out as the
           peak between dense technical explainers and paint-by-numbers
           how-tos.
       
          mewpmewp2 wrote 1 day ago:
           Depends on what you mean by "using" it.
           
           If somebody asked me if it's possible to scp my git repo over to
           another box and use it there, or vice versa, I would have said
           yes, that is possible. Although I would've felt uneasy doing
           that.
          
          If somebody asked me if git clone ssh:// ... would definitely work, I
          wouldn't have known out of the gate, although I would have thought it
          would be neat if it did and maybe it does. I may have thought that
          maybe there must be some sort of git server process running that
          would handle it, although it's plausible that it would be possible to
          just do a script that would handle it from the client side.
          
          And finally, I would've never thought really to necessarily try it
          out like that, since I've always been using Github, Bitbucket, etc. I
          have thought of those as permanent, while any boxes I have could be
          temporary, so not a place where I'd want to store something as
          important to be under source control.
       
            Klonoar wrote 1 day ago:
            Am I misreading your comment?
            
            You’ve always used GitHub but never known it could work over ssh?
            Isn’t it the default method of cloning when you’re signed in
            and working on your own repository…?
       
              mewpmewp2 wrote 1 day ago:
               I have used SSH for GitHub of course, but the thought that I
               could also use it from any random machine to another never
               occurred to me. And had it occurred to me, I would have
               thought that maybe SSH is used as a mechanism for
               authentication, but that it might still require some further
               specialized server due to protocols unknown to me. I always
               thought of SSH or HTTPS as means of authentication and talking
               to the git server rather than the thing that processes
               cloning.
              
              E.g. maybe the host would have to have something like apt install
              git-server installed there for it to work. Maybe it wouldn't be
              available by default.
              
              I do know however that all info required for git in general is
              available in the directory itself.
       
                kragen wrote 1 day ago:
                Yes, SSH is used as a mechanism for authentication, and it
                still requires some further specialized server due to some
                protocols you don't know about.  The server is git-upload-pack
                or git-receive-pack, depending on whether you're pulling or
                pushing.  Your inference that a Linux distribution could
                conceivably put these in a separate package does seem
                reasonable, since for example git-gui is a separate package in
                Debian.  I don't know of any distros that do in fact package
                Git that way, but they certainly could.
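                 
                 You can even invoke that server by hand to see there's no
                 extra daemon involved (a sketch; the path is an assumption):
                 
                     ssh server "git-upload-pack '/home/git/myrepo.git'"
                     # ...prints the repo's ref advertisement and waits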
       
          victorbjorklund wrote 1 day ago:
          I never thought about it. If somebody had asked me, yeah. Of course
          it makes sense. But it's just one of those things where I haven't
          thought about possibility.
       
          thwarted wrote 1 day ago:
           Yes. I've been subject to claims that a single person can't start
           a project unless and until an official, centralized repo is set up
           for them. I've responded with "git init is all that is necessary
           to get started", but they wouldn't hear it.
       
            mewpmewp2 wrote 1 day ago:
            Depends, what's the reasoning? Because technically anyone can start
            a project even without Git. Or even without a computer. Someone can
            use a pen to write code on a paper.
            
            Depends on what you mean by "a project". If it's policy related,
            maybe it's company's policy that all code that is written must be
            stored in a certain way for multitude of reasons.
       
              thwarted wrote 1 day ago:
              They don't have a reason. There's no policy that keeps them from
              doing this. Sure, the whole point is to ultimately have the code
              in a common place where backups and code review can happen, but
              if it's a matter of starting something sooner because it takes a
              few days for the request to flow through to get things set up,
              they are not constrained by that AT ALL. They can create a git
              repo with git init immediately, start working, and once the repo
              is set up in the common area, git push all their work into it.
              Rather than train people on this, we spend time trying to hasten
              the common area repo setup time and put additional unneeded
              burden on the team responsible for that.
       
                mewpmewp2 wrote 1 day ago:
                What are you using for the centralized repos? Why does it take
                multiple days in the first place?
       
                  thwarted wrote 1 day ago:
                  It doesn't matter. They are centralized on servers that are
                  ssh accessible, creating it is effectively mkdir and git
                  init.
                  
                  It's not about how long the action takes, it's about how much
                  the team responsible for that is loaded and can prioritize
                  things. Every team needs more round tuits. Anyone who works
                  in an IT support role knows this. The point is that they can
                  self-service immediately and there is no actual dependency to
                  start writing code and using revision control, but people
                  will trot out any excuse.
       
                    mewpmewp2 wrote 1 day ago:
                    But why can't the teams themselves do it? All places I've
                    seen or been to have had teams able to create their own
                    repositories, either they use cloud Git providers like
                    Bitbucket, Gitlab or Github, or they have self hosted
                    Gitlab, Github etc.
       
                      maccard wrote 1 day ago:
                       Lots of places (unfortunately) restrict repo creation
                       or CI pipeline creation. The platform team might need
                       to spin up the standard stack for your project, VPN
                       access added for AWS environments, etc. In the sorts
                       of orgs where this happens, doing it properly is more
                       important than getting started.
       
              udev4096 wrote 1 day ago:
               You are missing the whole point. The OP is pointing out how
               people are so used to GitHub that they are oblivious to using
               git offline.
       
                mewpmewp2 wrote 1 day ago:
                It just doesn't make sense to me unless it's a company policy
                type of thing.
       
                  thwarted wrote 1 day ago:
                  Exactly, it doesn't make sense other than that folks don't
                  actually know how to do even the most basic thing with git
                  (git init).
       
                    throwaway290 wrote 1 day ago:
                    Tons of people never even touch git cli, they use some gui
                    frontend/IDE.
                    
                     Tons of people who DO use git cli don't know git init.
                     Their whole life has been: create a project on github
                     and clone it. Anyway, initting a new project isn't the
                     most "basic" thing with git; it's used in less than .01%
                     of total git commands.
                     
                     If you combine the above, easily MOST people have no
                     idea about git init.
       
                thwarted wrote 1 day ago:
                Not even "offline", just "using git".
       
            fingerlocks wrote 1 day ago:
             You must work at Microsoft? A pound of paperwork for every new
             repo really shuts down experimental side projects. I showed my
             colleagues that we can share code via ssh or (painfully)
             OneDrive anytime instead. They reacted like I was asking them to
             smoke crack behind the dumpsters. “That’s dangerous, gonna
             get in trouble, no way bro”
       
              mewpmewp2 wrote 1 day ago:
              If you are working in a large corp and not your own side project,
              that honestly does sound like a bad idea.
       
                udev4096 wrote 1 day ago:
                It's not. Unless you work at a shitty place, something like
                what OP mentioned is far from a big deal and most people would
                wanna know more about it
       
                  mewpmewp2 wrote 1 day ago:
                  The code is considered IP of the corp and they probably have
                  rules around how or where IP should be shared, access
                  controls etc.
       
                    fingerlocks wrote 1 day ago:
                     There is nothing inherently special about code compared
                     to, say, a confidential marketing deck or sales plan. If
                     those can go on a network drive, or a service like
                     OneDrive, why can't we put our code there? I'm not
                     talking about the Xbox firmware or the entire Windows
                     source. This is about little one-off projects, highly
                     specialized tooling, or experimental proof-of-concepts
                     that are blocked by bureaucracy.
                     
                     It's a misguided policy that hurts morale and leaves a
                     tremendous amount of productivity and value on the
                     floor. And I suspect that many of the policies are in
                     place simply because a number of the rule makers aren't
                     aware of how easy it is to share the code. Look how many
                     in this thread alone weren't aware of the inherent
                     distributability of git repositories, and presumably
                     they're developers. You really think some aging career
                     devops person who has worked at Microsoft for 30 years
                     is going to make sensible policies about software that
                     was shunned and forbidden only a decade ago?
       
                      udev4096 wrote 1 day ago:
                      OP is fully soaked with corp bullshit. A fellow sellout!
                      Please read this:
                      
  HTML                [1]: https://geohot.github.io/blog/jekyll/update/2025...
       
                fingerlocks wrote 1 day ago:
                Please, elaborate. I can share my screen with coworkers and
                talk about all sorts of confidential things, and I can even
                give them full remote access to control everything if I wished.
                So why would pushing a some plain text code directly to their
                machine be so fundamentally different than all the other means
                of passing bits between our machines?
       
                  mewpmewp2 wrote 1 day ago:
                   If you share your screen you are in control of what you
                   show; if you give someone SSH access, what would stop them
                   from running a small script to fetch everything you have,
                   or doing whatever with your computer? It's a blatant
                   security violation to me. Just no reason to do that.
                  
                  In large corps you usually have policies to not leave your
                  laptop unattended logged in, in the office, that would be
                  potentially even worse than that.
       
                    fingerlocks wrote 1 day ago:
                    I wasn't aware that I could run a small script and fetch
                    everything from every host with an ssh git repo. TIL.
       
                      JoBrad wrote 1 day ago:
                      I mean…git hooks are just scripts. If you can fetch,
                      you can pull (or push) a script that executes locally.
       
          __MatrixMan__ wrote 1 day ago:
          I've been using git since 2007, this only dawned on me last year.
          
          Git is especially prone to the sort of confusion where all the
          experts you know use it in slightly different ways so the culture is
          to just wing it until you're your own unique kind of wizard who can't
          tie his shoes because he favors sandals anyhow.
       
            imbnwa wrote 13 hours 0 min ago:
            This is also vim
       
            kragen wrote 1 day ago:
            With previous version-control systems, such as SVN and CVS, I found
            that pair programming helps a great deal with this problem.  I
            started using Git after my last pair-programming gig,
            unfortunately, but I imagine it would help there too.
            
            (I started using Git in 02009, with networking strictly over ssh
            and, for pulls, HTTP.)
       
            haskellshill wrote 1 day ago:
            > all the experts you know use it in slightly different ways
            
            What? Knowing that a git repo is just a folder is nowhere near
            "expert" level. That's basic knowledge, just like knowing that the
            commits are nodes of a DAG. Sadly, most git users have no idea how
            the tool works. It's a strange situation, it'd be like if a
            majority of drivers didn't know how to change gears.
       
              throwaway2037 wrote 22 hours 28 min ago:
              > just like knowing that the commits are nodes of a DAG
              
              Hello gatekeeping!  I have used Git for more than 10 years.  I
              could not explain all of the ins-and-outs of commits, especially
              that they are "nodes of a DAG".  I do just fine, and Git is
              wonderful to me.  Another related example: I would say that 90%+
              of .NET and Java users don't intimately understand their virtual
              machine that runs their code.  Hot take: That is fine in 2025;
              they are still very productive and add lots of value.
       
                haskellshill wrote 22 hours 17 min ago:
                "Intimately understand the VM" is not the same as knowing what
                data structure you're using. It'd be comparable to not knowing
                the difference between an array and a linked list. Sure you may
                call it gatekeeping but likewise I may call your style willful
                ignorance of the basics of the tools you're using. Have you
                never used rebase or cherry-pick?
       
              jancsika wrote 1 day ago:
              > It's a strange situation, it'd be like if a majority of drivers
              didn't know how to change gears.
              
              If you literally can't change gears then your choices are a) go
              nowhere (neutral), b) burn out your clutch (higher gears), or c)
              burn out your engine (1st gear). All are bad things. Even having
              an expert come along to put you in the correct gear once, twice,
              or even ten times won't improve things.
              
              If a programmer doesn't know that git is a folder or that the
              commits are nodes of a DAG, nothing bad will happen in the short
              term. And if they have a git expert who can get them unstuck say,
              five times total, they can probably make it to the end of their
              career without having to learn those two details of git.
              
              In short-- bad analogy.
       
                haskellshill wrote 22 hours 14 min ago:
                It's an analogy, there's no need to analyze it literally. And
                no, I've worked with some devs who don't understand git
                (thankfully I don't anymore) and it was quite a bit more than
                "five times" they got stuck or messed up the repo on the remote
                in an annoying way. Sure, if you regularly write code using a
                bunch of evals or gotos "nothing bad will happen" but it's a
                very suboptimal way of doing things.
       
              __MatrixMan__ wrote 1 day ago:
              My point is only that the understanding is uneven. I'm ready to
              debate the merits of subtrees vs submodules but I didn't know the
              folder thing. Am I weird? Yes, but here is a place where weird is
              commonplace.
       
              rco8786 wrote 1 day ago:
              The majority of drivers DON’T know how to change gears.
              
              You are simultaneously saying that something is not expert level
              knowledge while acknowledging that most people don’t know it.
              Strange.
       
                haskellshill wrote 1 day ago:
                "Expert level knowledge" implies something more to me than
                simply few people knowing about it. It's ridiculous to say that
                knowing how to change gears makes you an expert driver, even if
                a minority know how to do it (such as in the US e.g.)
       
                seba_dos1 wrote 1 day ago:
                > The majority of drivers DON’T know how to change gears.
                
                I'm not sure that's true, unless you only take certain parts of
                the world into consideration.
       
                singpolyma3 wrote 1 day ago:
                I think the idea is it shouldn't be expert level. It used to be
                in every tutorial. But you're right these days it may indeed be
                expert level knowledge
       
            tianqi wrote 1 day ago:
             I like this comment. Over the years I've found that whenever I
             see others using git, everyone uses it in a different way and
             even for different purposes. This has left me really confused
             about what the standard practice for Git actually is.
       
              AdrianB1 wrote 1 day ago:
              This is because people have different needs, Git is trying to
              cover too many things, there are multiple ways to achieve goals
              and therefore there is no standard practice. There is no single,
              standard way to cook chicken breast and that is a way simpler
              thing.
              
              The solution is to set team/department standards inside companies
              or use whatever you need as a single contributor. I saw attempts
              to standardize across a company that is quite decentralized and
              it failed every time.
       
                exasperaited wrote 1 day ago:
                >  there are multiple ways to achieve goals and therefore there
                is no standard practice
                
                This is ultimately where, and why, github succeeded. It's not
                that it was free for open source. It's that it ironed out lots
                of kinks in a common group flow.
                
                Git is a cultural miracle, and maybe it wouldn't have got its
                early traction if it had been overly prescriptive or
                proscriptive, but more focus on those workflows earlier on
                would have changed history.
       
            skydhash wrote 1 day ago:
            The Pro Git book is available online for free
            
  HTML      [1]: https://git-scm.com/book/en/v2
       
              kevmo314 wrote 1 day ago:
              This sort of thing is part of the problem. If it takes reading
              such a long manual to understand how to properly use Git, it's no
              wonder everyone's workflow is different.
       
                kragen wrote 1 day ago:
                I don't see it as a problem that everyone's workflow is
                different, and, separately, I don't see it as a problem that it
                takes reading such a long manual to understand all the
                possibilities of Git.  There is no royal road to geometry.  Pro
                Git is a lot shorter than the textbook I learned calculus from.
                
                Unlike calculus, though, you can learn enough about Git to use
                it usefully in ten minutes.  Maybe this sets people up for
                disappointment when they find out that afterwards their
                progress isn't that fast.
       
                  symbogra wrote 23 hours 39 min ago:
                  Agreed. Back when I first came across git in 2009 I had to
                  re-read the porcelain manual 3 times before I really got it,
                  but then the conceptual understanding has been useful ever
                   since. I'm often the guy explaining git to newbies on my
                   team.
       
                  __MatrixMan__ wrote 1 day ago:
                  Agreed.  I'd read the manual if there was something I needed
                  from it, but everything is working fine.  Yeah I might've
                  rsynced between some local folders once or twice when I
                  could've used git, maybe that was an inelegant approach, but
                  the marginal cost of that blunder was... about as much time
                  I've spent in this thread so whatever.
       
                    skydhash wrote 1 day ago:
                     The nice thing about knowing more about git is that it
                     unlocks another dimension in editing code. It's a very
                     powerful version of undo-redo, aka time travelling. Then
                     you start to think in terms of changes and patches.
                     
                     An example of that is the suckless philosophy, where
                     extra features come as patches and diffs.
       
                PaulDavisThe1st wrote 1 day ago:
                Do you know any construction folks?
       
          dtgriscom wrote 1 day ago:
          I actually realized this last week, and have yet to try it.
          Programming for almost fifty years, using Git for thirteen years, and
          not an idiot (although there are those who would dispute this,
          including at times my spouse).
       
          tuwtuwtuwtuw wrote 1 day ago:
          I would be surprised if more than 10% of git users know that. Would
          be equally surprised if more than 20% of git users know how to use
          ssh.
          
          I think your age isn't the issue, but I suspect you're in a bubble.
       
            kleiba wrote 1 day ago:
            The non-Windows bubble?
       
          isodev wrote 1 day ago:
           I imagine a large part of the developer community does not, in
           fact, know that GitHub is not git, and that one can get everything
           they need without feeding their code to Microsoft's AI empire.
           Just another "embrace, extend, and extinguish".
       
          bluedino wrote 1 day ago:
          Considering git is one of those things barely anyone knows how to
          actually use, yes
       
            seba_dos1 wrote 1 day ago:
            My theory is that git is just so easy to use without understanding
            it that you end up with lots of people using it without
            understanding it :)
       
              BoiledCabbage wrote 1 day ago:
              It's not "so easy to use without understanding it", it's the
              opposite it has so much unnecessary complexity (on top of a
              brilliant simple idea btw), that once people learn how to do what
              they need, they stop trying to learn any more from the pile of
              weirdness that is git.
              
              Decades from now, git will be looked back at in a similar but
              worse version of the way SQL often is -- a terrible interface
              over a wonderful idea.
       
                seba_dos1 wrote 1 day ago:
                I don't think that's true. In my experience it takes time, but
                once it clicks, it clicks. Sure, there is a bunch of weirdness
                in there as well, but that starts way further than where your
                typical git user stops.
                
                 I don't think git would have ended up this popular if it
                 couldn't be used in a basic way by just memorizing a few
                 commands, without having to understand its repository model
                 (however simple) well.
       
          candiddevmike wrote 1 day ago:
          What are some fun/creative ways to do GitHub/GitLab style CI/CD with
          this method?  Some kind of entry point script on push that determines
          what to do next? How could you decide some kind of variables like
          what the push was for?
       
            CGamesPlay wrote 1 day ago:
            Check the docs for the post-receive hook, it does give everything
            you need. I don't know what you have in mind by "GitHub/Gitlab
            style", but it's just a shell script, and you can add in as much
            yaml as you want to feel good about it.
            
            I did a quick search for "post-receive hook ci" and found this one:
            
  HTML      [1]: https://gist.github.com/nonbeing/f3441c96d8577a734fa240039...
       
            skydhash wrote 1 day ago:
            `man 5 githooks` is your friend. Hooks are just scripts and they
            can receive parameters. `post-receive` is most likely what you
            would want.
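             
             A minimal post-receive deploy-and-test hook might look like
             this (a sketch; the paths, branch, and test script are
             assumptions):
             
                 #!/bin/sh
                 # hooks/post-receive: stdin gets "oldrev newrev refname" lines
                 while read oldrev newrev refname; do
                     if [ "$refname" = "refs/heads/main" ]; then
                         git --work-tree=/srv/app checkout -f main
                         cd /srv/app && ./run-tests.sh
                     fi
                 done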
       
            yule wrote 1 day ago:
            I wrote about that idea here:
            
  HTML      [1]: https://www.stchris.net/tiny-ci-system.html
       
          t_mahmood wrote 2 days ago:
           My way, in the past, was to put bare repos on Dropbox and clone
           the bare repo to a real path. Done.
           
           That way, I
           
           1. didn't have to worry about sync conflicts. Once complete, just
           push to origin
           2. had my code backed up outside my computer
           
           I can't exactly remember if it saved space. I assumed it did, but
           I'm not sure anymore. But I feel it was quite reliable.
           
           I gave that approach up for GitHub, but I'm thinking of migrating
           to Codeberg.
           
           With Tailscale, I feel we have so many options now, instead of
           putting our personal computers out on the Internet.
       
            crazygringo wrote 1 day ago:
            That doesn't work -- I've tried it.
            
            I mean, it works fine for a few days or weeks, but then it gets
            corrupted. Doesn't matter if you use Dropbox, Google Drive,
            OneDrive, whatever.
            
            It's apparently something to do with the many hundreds of file
            operations git does in a basic operation, and somehow none of the
            sync implementations can quite handle it all 100.0000% correctly.
            I'm personally mystified as to why not, but can attest from
            personal experience (as many people can) that it will get
            corrupted. I've heard theories that somehow the file operations get
            applied out of order somewhere in the pipeline.
       
              t_mahmood wrote 28 min ago:
              Interesting, so, I will have to keep this in mind if I ever want
              to do it again.
              
              What about SyncThing?
       
              WorldMaker wrote 21 hours 15 min ago:
              Among other reasons, the sync engines of these cloud stores all
              have "conflict detection" algorithms when multiple machines touch
              the same files, and while a bare repo avoids conflicts in the
              worktree by not having a worktree, there are still a lot of files
              that get touched in every git push: the refs, some of the pack
              files, etc.
              
              When it is a file conflict the sync engines will often drop
              multiple copies with names like "file (1)" and "file (2)" and so
              forth. It's sometimes possible to surgically fix a git repo in
              that state by figuring out which files need to be "file" or "file
              (1)" or "file (2)" or whatever, but it is not fun.
              
              In theory, a loose objects-only bare repo with `git gc` disabled
              is more append-only and might be useful in file sync engines like
              that, but in practice a loose-objects-only bare repo with no `git
              gc` is not a great experience and certainly not recommended. It's
              probably better to use something like `git bundle` files in a
              sync engine context to avoid conflicts. I wonder if anyone has
              built a useful automation for that.
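               
               The manual version of that is only a few commands (a sketch;
               the Dropbox path is an assumption):
               
                   git bundle create ~/Dropbox/myrepo.bundle --all
                   # on another machine:
                   git clone ~/Dropbox/myrepo.bundle myrepo
                   git pull ~/Dropbox/myrepo.bundle main   # pick up later updates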
       
              ics wrote 1 day ago:
              With bare repos? I was bit by this a few years ago when work
              switched to "everything on OneDrive" and it seemed fine until one
              day it wasn't. Following that I did tests with Dropbox and iCloud
              to find that all could corrupt a regular repo very easily. In the
              past few months though I've been trying it again with bare repos
              on iCloud and not had an issue... yet.
       
                crazygringo wrote 1 day ago:
                I decided to try again just a couple of months ago on Google
                Drive for Desktop with a bare repo. Got corrupted on literally
                the third push.
                
                Good luck with iCloud!
       
              kragen wrote 1 day ago:
              I've had a lot of success with using whatever.    A lot of
              whatevers can quite handle many hundreds of file operations
              100.0000% correctly.
       
                crazygringo wrote 1 day ago:
                Ha. I guess for me the other whatever is iCloud, but that
                corrupts too.
                
                But have you ever found a cloud sync tool that doesn't
                eventually corrupt with git? I'm not aware of one existing, and
                I've looked.
                
                Again, to be clear, I'm not talking about the occasional rsync,
                but rather an always-on tool that tries to sync changes as they
                happen.
       
                  kragen wrote 1 day ago:
                  I was thinking of my pendrive, but apparently [1] it also
                  works on NFS.
                  
  HTML            [1]: https://news.ycombinator.com/item?id=45714245
       
          kace91 wrote 2 days ago:
          I didn’t know either - or rather, I had never stopped to consider
          what a server needs to do to expose a git repo.
          
          But more importantly, I’m not sure why I would want to deploy
          something by pushing changes to the server. In my mental model the
          repo contains the SOT, and whatever’s running on the server is
          ephemeral, so I don’t want to mix those two things.
          
          I guess it’s more comfortable than scp-ing individual files for a
          hotfix, but how does this beat pushing to the SOT, sshing into the
          server and pulling changes from there?
       
            skydhash wrote 1 day ago:
            There's a lot of configuration possible due to the fact that git is
            decentralized. I have a copy on my computer which is where I do
            work. Another on a vps for backup. Then one on the app server which
            only tracks the `prod` branch. The latter is actually bare, but
            there's a worktree for the app itself. The worktree is updated via
            a post-receive hook and I deploy change via a simple `git push
            server prod`
       
              kace91 wrote 1 day ago:
              You actually led me into a dive to learn what worktrees are, how
              bare repos + worktrees behave differently from a regular clone
              and how a repo behaves when it’s at the receiving end of a
              push, so thanks for that!
              
              I’ve never worked with decentralized repos, patches and the
              like. I think it’s a good moment to grab a book and relearn git
              beyond shallow usage - and I suspect its interface is a bit too
              leaky to grok it without understanding the way it works under the
              hood.
       
          politelemon wrote 2 days ago:
          Also relatively unknown: You can clone from a directory. It won't
          accomplish the backup feature but it's another option/feature.
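           
           For example (both forms work; file:// goes through the normal
           transport instead of hardlinking objects):
           
               git clone /srv/repos/project.git mycopy
               git clone file:///srv/repos/project.git mycopy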
       
            superdisk wrote 1 day ago:
            If you do it over NFS or whatever then you can collaborate as well.
       
              mvanbaak wrote 1 day ago:
              git over nfs ... not the very best idea.
       
                kragen wrote 1 day ago:
                I haven't tried it, but I think it's fine if only one person
                has write access to any given clone.  You can pull back and
                forth between clones freely.  It's if you have two Git clients
                trying to write to the same repo that you'll have problems.
       
                  pavon wrote 1 day ago:
                  I've put private working copies on NFS and CIFS. NFS worked
                  pretty well (which probably speaks as much to the admins as
                  the tech). Samba mounts on the other hand had all sorts of
                  problems with time stamps that confused not only git, but the
                  build system as well.
       
                  kbolino wrote 1 day ago:
                  Shared write access to the same git repo directory can be
                  done sanely, but you have to get a number of things right
                  (same group for all users, everything group writable, sticky
                  bit on directories, set config core.sharedRepository=group):
                  
  HTML            [1]: https://stackoverflow.com/a/29646155
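                   
                   Roughly, those steps are (a sketch; the group name and
                   path are assumptions):
                   
                       chgrp -R devs /srv/repo.git
                       chmod -R g+w /srv/repo.git
                       find /srv/repo.git -type d -exec chmod g+s {} +
                       git -C /srv/repo.git config core.sharedRepository group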
       
                    kragen wrote 1 day ago:
                    Yes, when you're not on NFS.  Maybe it works on NFS but I
                    wouldn't bet my project on it.    Locking reliably on NFS is
                    easy to get wrong, and it's a comparatively little-tested
                    scenario now compared to 30 years ago.    (You'll notice that
                    the question doesn't even mention the possibility of NFS.) 
                    Fortunately at least with Git it's easy to have lots of
                    backups.
       
                      superdisk wrote 1 day ago:
                      For what it's worth, they do call it out in the manual
                      as a common thing to do [1]. Granted, I've never tried
                      it, so take it with a grain of salt.
                      
  HTML                [1]: https://git-scm.com/book/ms/v2/Git-on-the-Server...
       
                        kragen wrote 1 day ago:
                        That makes me a little less suspicious, and of course
                        the Git developers are well aware of things like
                        concurrency problems and filesystem limitations. But
                        I'd still regard it as an area to be wary of, though
                        only if two clients might be writing to the same repo
                        concurrently.
       
          ryandv wrote 2 days ago:
          Filesystems and folders are foreign and elusive concepts to gen Z.
          
  HTML    [1]: https://www.theverge.com/22684730/students-file-folder-direc...
       
            andai wrote 1 day ago:
            It gets better than that...
            
  HTML      [1]: https://www.youtube.com/shorts/D1dv39-ekBM
       
              Izkata wrote 1 day ago:
              A tip: you can put the video ID in a regular youtube URL to get
              the full interface instead of the limited shorts one:
              
  HTML        [1]: https://www.youtube.com/watch?v=D1dv39-ekBM
       
                andai wrote 23 hours 51 min ago:
                Thanks. You can also replace /shorts/ with /watch/ and get the
                same result.
                
  HTML          [1]: https://youtube.com/watch/D1dv39-ekBM
       
            liveoneggs wrote 1 day ago:
            I have encountered it in real life many times. These days I try to
            give juniors extra space to expose their gaps in things I
            previously assumed were baseline fundamentals - directories and
            files, tgz/zip archives, etc.
       
          brucehoult wrote 2 days ago:
          Or just put the repo in a shared directory in a high-trust group of
          developers (or just yourself).
       
          liveoneggs wrote 2 days ago:
          most people think git = github
       
            ruguo wrote 1 day ago:
            Looks like they’re not developers after all
       
            setopt wrote 2 days ago:
            Yup. I’ve heard several people say that Git is a product from
            Microsoft…
       
              littlecranky67 wrote 2 days ago:
              The irony, when you realize that Linus Torvalds created git.
       
                mrweasel wrote 1 day ago:
                There's an interview with Torvalds where he says his daughter
                told him that in the computer lab at her college he's better
                known for Git than for Linux.
                
                Clip from the interview:
                
  HTML          [1]: https://www.youtube.com/shorts/0wLidyXzFk8
       
              devsda wrote 2 days ago:
              I wouldn't mind it if those people were from a non-tech
              background.
              
              Now, if it is a growing misconception among cs students or anyone
              doing software development or operations, that's a cause for
              concern.
       
            bfkwlfkjf wrote 2 days ago:
            Nonsense...
            
            Even someone who knows that git isn't GitHub might not be aware
            that ssh is enough to use git remotely. That's actually the case
            for me! I'm a HUGE fan of git, I mildly dislike GitHub, and I never
            knew that ssh was enough to push to a remote repo. Like, how does
            it even work, I don't need a server? I suspect this is due to my
            poor understanding of ssh, not my poor understanding of git.
       
              Joeboy wrote 1 day ago:
              I mean, you do need an ssh server. Basically ssh can run commands
              on the remote machine. Most commonly that command is a shell,
              but it can also be a git command.
       
                bfkwlfkjf wrote 1 day ago:
                Yup! Someone else just pointed it out. Brilliant, thank you!
       
              skydhash wrote 2 days ago:
              Git is distributed, meaning every copy is isolated and does not
              depend on any other copy. Adding a remote to a repo is mostly
              giving a name to a URL (URI?) for the fetch, pull, and push
              operations, which exchange commits. As commits are immutable and
              form a chain, it's easy to know when two nodes diverge, and
              conflict resolution can take place.
              
              From the git-fetch(1) manual page:
              
              > Git supports ssh, git, http, and https protocols (in addition,
              ftp and ftps can be used for fetching, but this is inefficient
              and deprecated; do not use them).
              
              You only need access to the other node's repo data. There's no
              dedicated server. You can also use a plain filesystem path and
              keep the other repo on a local drive.
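
              For example (names and paths hypothetical):

                  # ssh URL in scp-like form
                  git remote add vps user@example.com:srv/project.git
                  # a plain path works too
                  git remote add usb /mnt/usb/project.git
                  git push vps main
                  git fetch usb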
       
                rpcope1 wrote 1 day ago:
                Sort of related, but given that FTP is supported, I wonder how
                much work it would take to use telnet as a transport.
       
                  chuckadams wrote 1 day ago:
                  Doable. It would basically be ssh but without encryption.
                  You'd have to bang out the login by hand, but
                  "username\npassword\n" will probably work, maybe with a
                  sleep in between, and of course you'll have to detect a
                  successful login too. Oh, and every 0xff byte will have to
                  be escaped with another 0xff.
                  
                  At that point, may as well support raw serial too.
                  
                  Supporting rlogin on the other hand is probably as simple as
                  GIT_SSH=rlogin
       
                bfkwlfkjf wrote 1 day ago:
                > There's no server.
                
                There IS a server, it's the ssh daemon. That's the bit I had
                never thought about until now.
       
                  charles_f wrote 1 day ago:
                  The server is git itself: it ships two commands,
                  git-receive-pack and git-upload-pack, which it starts
                  through ssh and talks to over stdin/stdout.
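
                  Roughly, this is what a push or fetch runs on the remote
                  end (host and path hypothetical); both commands then speak
                  the pack protocol over stdin/stdout:

                      # what "git push" effectively does:
                      ssh user@host "git-receive-pack 'srv/project.git'"
                      # what "git fetch" effectively does:
                      ssh user@host "git-upload-pack 'srv/project.git'"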
       
                  ogig wrote 1 day ago:
                  You can think of ssh://server/folder as a normal /folder.
                  ssh provides auth and encryption for a remotely hosted
                  folder, but you can forget about it for the purpose of
                  understanding the nature of git's model.
       
                  skydhash wrote 1 day ago:
                  SSH is just a transport to get access to the repo
                  information. The particular implementation does not matter (I
                  think). Its configuration is orthogonal to git.
       
                    bfkwlfkjf wrote 1 day ago:
                    It's just a transport, and it needs a server.
       
              liveoneggs wrote 2 days ago:
              Did you know that you can use git locally? It works just like
              that because ssh is a remote shell.
              
              Read [1]. Anyway, it sounds like you have a lot of holes in your
              git knowledge and should read some man pages.
              
  HTML        [1]: https://git-scm.com/docs/git-init#Documentation/git-init...
       
                bfkwlfkjf wrote 1 day ago:
                No, you're wrong. These are holes in my ssh knowledge, and your
                comment makes me think you have the same holes.
       
                  liveoneggs wrote 1 day ago:
                  lol you sound like a treat, dude
                  
  HTML            [1]: https://git-scm.com/book/ms/v2/Git-on-the-Server-The...
       
              porridgeraisin wrote 2 days ago:
              > I don't need a server?
              
              You do: an SSH server needs to be running on the remote if you
              want to ssh into it using your ssh client - the `ssh` command on
              your laptop. It's just not an http server, is all.

              You start that server via the `sshd` [systemd] service. On most
              VPSs it's enabled by default.
              
              Git supports both http and ssh as the "transport method". So, you
              can use either. Browsers OTOH only support http.
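
              So a working "git server" can be as little as this (host and
              paths hypothetical):

                  # on the remote: sshd running (service name varies by
                  # distro: sshd or ssh), plus a bare repo to push into
                  sudo systemctl enable --now sshd
                  git init --bare ~/project.git

                  # on your laptop:
                  git clone user@host:project.git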
       
                bfkwlfkjf wrote 1 day ago:
                Exactly! Only now the dots connected. Thank you!!
                
                Edit: hey this is really exciting. For a long time one of the
                reasons I've loved git (not GitHub) is the elegance of being a
                piece of software which is decentralized and actually works
                well. But I'd never actually used the decentralized aspect of
                it, I've always had a local repo and then defaulted to use
                GitHub, bitbucket or whatever instead, because I always thought
                I'd need to install some "git daemon" in order to achieve this
                and I couldn't be bothered. But now, this is so much more
                powerful. Linus Torvalds best programmer alive, change my mind.
       
                  Dylan16807 wrote 1 day ago:
                  And in general, any daemon that manipulates files can be
                  replaced by ssh (or ftp) access and a local program.
                  
                  And most things are files.
       
                    tsimionescu wrote 1 day ago:
                    BTW, a nice example of this general concept is Emacs' TRAMP
                    mode. This is a mode where you can open and manipulate
                    files (and other things) on remote systems simply by typing
                    a remote path in Emacs. Emacs then runs ssh/scp under the
                    hood to read or modify the contents of those files, and of
                    course to run any required commands, such as deleting a
                    file.
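
                    For example, opening this path from a shell (host
                    hypothetical) edits the remote file over ssh:

                        emacs /ssh:user@host:/etc/hosts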
       
            albert_e wrote 2 days ago:
            IT support and cybersecurity teams responsible for enforcing a
            network access restriction on "github.com" ... blocked a user's
            request to install "git" locally, citing that policy. The
            organization in question does IT services, software development,
            and maintenance as its main business.
       
              watwut wrote 1 day ago:
              Sometimes I feel like IT and security people compete over who
              can make getting work done the hardest.
              
              These guys won.
       
              jiggawatts wrote 1 day ago:
              That’s… special.
       
            halJordan wrote 2 days ago:
            I always found these sorts of categorical denigrations to be off
            base. If most people do think git = github, then that's because
            they were taught it by somebody. A lot of somebodies, for "most
            people" - likely the same somebodies who come to places like this.
            It has always taken a village to raise a child, and that is just
            as true for a junior programmer as for an infant. But instead we
            live in a sad world of "why didn't school teach person X".
       
              Dylan16807 wrote 1 day ago:
              What makes you say that people complain about the spread of
              incorrect knowledge "instead" of teaching?  Because there's
              nothing wrong with doing both.
       
              liveoneggs wrote 2 days ago:
              they were taught by github
       
              loloquwowndueo wrote 2 days ago:
              Despite your indignation, the observation that most people think
              GitHub is git is entirely true. Do you point it out when you spot
              someone having that mistaken belief? I do.
       
              blueflow wrote 2 days ago:
              Humans are fallible; relying on hearsay is unprofessional. Learn
              your craft properly.
              
              Imagine what the equivalent argument for a lawyer or nurse would
              be. Those rules ought to apply to engineers, too.
       
       
   DIR <- back to front page