commented: Author describes the symptoms, finds the cause, fixes the problem, and says how to avoid it in the future. Nice.

commented: > then apparently what I'm seeing is just Docker's default behavior of showing container processes in the host's ps output, but they're actually isolated. [...] Apparently, when you run ps aux on a Docker host, you see processes from all containers because they share the same kernel. But those processes are in their own mount namespace - they can't see or touch the host filesystem.

This is a Linux behavior, and other container runtimes will exhibit it too. The PID namespace is hierarchical; from a parent namespace you can see all the processes present in the child namespaces. The path shown comes from the path under which the application sees itself (in a container this is a separate mount namespace, after pivot_root has been called to root it properly in the container's filesystem).

commented: I just experienced this yesterday on my VPS! I was showcasing my Coolify instance to a coworker and noticed my CPU usage was at 100% across all cores. I saw programs called javao and xmrig running in that same path, /tmp/.XIN-unix/. It is self-replicating in memory. The culprit was also my umami container, which depends on Next.js, via this CVE. My Umami instance was not up to date with the latest version that closes this attack vector. Fun stuff!

commented: One thing I find useful is to limit my publicly accessible ports. You can still access other things by using SSH port forwarding. If he had bound the vulnerable service only to a localhost address, it wouldn't have been accessible from the open web.

commented: SSH forwarding is finicky to use, especially on mobile. I prefer a VPN, with the sockets listening explicitly only on localhost and the VPN interface.

commented: Everybody should be doing this! So many things aren't exploitable if the attacker doesn't have access in the first place.
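The namespace behavior described above can be inspected directly through /proc on any Linux box; a minimal sketch (the container PID in the last comment is a placeholder, not from the post):

```shell
# Each process's namespace memberships are exposed as symlinks in /proc.
# On the host, your shell and most host processes share the same IDs;
# a containerised process seen in the host's ps has different ones.
readlink /proc/self/ns/mnt   # prints something like mnt:[4026531841]
readlink /proc/self/ns/pid   # prints something like pid:[4026531836]

# From the host, compare against a container's process the same way:
#   readlink /proc/<container-pid>/ns/mnt
# A different mnt:[...] value means a different mount namespace,
# i.e. a different view of the filesystem despite appearing in host ps.
```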
commented: Blog post heavily written by an LLM (presumably from concise notes). Please don't do that.

Does your analytics thing need to be publicly exposed? The best way to avoid this is just not publicly hosting things. Need to access your personal stuff from anywhere in the world? Use a tunnel. I am currently working towards the only publicly exposed ports on my internet-facing servers being HTTP, HTTPS, SMTP (for incoming mail, not for submission), DNS, and WireGuard. Things exposed over HTTP would be strictly internet-facing things like my blog and my git repositories. Anything only I or my fiancée use would then only be available via a VPN - yes, even when you're in the house. It means you don't need 15 Wi-Fi networks: you put your IoT shit on one, your guests on another, and everything you marginally trust on the third, and even then the only way to access the actually important stuff is through the VPN (which will work from any network, or the internet, through careful routing).

commented: > Blog post heavily written by an LLM (presumably from concise notes). Please don't do that.

I don't read this as being written by an LLM? Also, the author has made it clear they were helped to do forensics by Claude, which is probably good.

commented: > I don't read this as being written by an LLM?

Several sections definitely smelled like LLM output to me.

commented: When the post was on the orange site the other day, the author confirmed that it's slop. People have found some obvious hallucinations in it too.

commented: Nice post. Time and again, we see the risks of exposing endpoints to the Internet. You could consider adding mature auth to all entrypoints, e.g. an HTTP reverse proxy that protects all your sites with HTTP basic auth or OAuth2. Or Tailscale, or Cloudflare Tunnels, or... By limiting the number of entrypoints, you limit the attack surface.
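The "bind to localhost, reach it over a tunnel" approach suggested in these comments can be sketched as follows; the container name, image tag, and port are hypothetical examples, not taken from the post:

```
# Publish the service only on the loopback interface, so it is never
# reachable from the open internet (hypothetical Umami deployment):
docker run -d --name umami -p 127.0.0.1:3000:3000 \
  ghcr.io/umami-software/umami:postgresql-latest

# From a client machine, reach it through an SSH local port forward:
#   ssh -N -L 3000:127.0.0.1:3000 user@vps
# then browse http://localhost:3000 on the client. A WireGuard or
# Tailscale interface in place of the SSH tunnel works the same way:
# bind the service to loopback plus the VPN interface address only.
```

The key detail is the `127.0.0.1:` prefix in `-p`; without it, Docker publishes the port on all interfaces and (by default) punches through the host firewall to do so.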
commented: > setting up fail2ban

I don't think this is worth it, especially with OpenSSH now blocking repeat offenders (see PerSourcePenalties).

commented: Nice! I didn't know that the PerSourcePenalties option existed. I currently use blacklistd on FreeBSD (I think it was renamed to blocklistd in FreeBSD 15). Is there then any good reason to use either fail2ban or blocklistd when one can set up something similar with this OpenSSH configuration option?

commented: It works for other services?

commented: OpenSSH will do this by default, so there's no setup needed. I expect there are fail2ban or blocklistd use cases that I'm not aware of.

commented: I dunno about all this random Docker stuff. I get that containerization helped in this case, but if they'd been running standard OS packages and followed the OS's security mailing list, they'd have been able to see that Umami got an important update. No need to know exactly what packages it depends on. Alternatively, keep an eye on every project's announcements for what you're running.

commented: Container isolation helped at runtime. OS packages wouldn't have helped with the fact that Umami had already patched the CVE but the author had not updated their container. A better analogy would be systemd isolation (or jails or something similar; I have only ever used systemd).

commented: > Container isolation helped at runtime. OS packages wouldn't have helped with the fact that Umami had already patched the CVE but the author had not updated their container.

It's unclear whether container isolation helped at runtime beyond just running the daemon as a separate user.

commented: Wouldn't you agree that container isolation also helped at runtime by forcing the exploit to create its local files inside the container filesystem and not on the host filesystem?
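For reference, the option discussed above is available in OpenSSH 9.8 and later, where it is enabled by default; tuning it is an sshd_config change. The durations and the exempt range below are illustrative, not recommendations:

```
# /etc/ssh/sshd_config excerpt -- penalise sources of repeated failures.
# Keywords per sshd_config(5); values here are illustrative.
PerSourcePenalties authfail:5s grace-exceeded:20s max:10m

# Exempt trusted networks from penalties (hypothetical range):
PerSourcePenaltyExemptList 192.0.2.0/24
```

Unlike fail2ban, this only covers sshd itself, which is the trade-off raised in the "It works for other services?" reply.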
At a minimum, this means that any update to the container would have removed all of the exploit's files in the container's /tmp directory, in a way that would not happen in an OS-package deployment. It also means the exploit code was unable to access host system configs in /etc/, for example, or any other sensitive files readable by UID 1001. It's easy to imagine the many ways this may have thwarted a broader exploit on the host and local network.
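That ephemerality can be demonstrated with any container: files written to the running container's writable layer are discarded whenever the container is recreated from its image. A sketch, with hypothetical container and file names:

```
# Drop a file into the running container (writable layer only):
docker exec umami touch /tmp/dropped-by-exploit

# Recreating the container from its image discards the writable layer,
# taking the dropped file with it:
docker compose up -d --force-recreate umami
docker exec umami ls /tmp/dropped-by-exploit
# -> "No such file or directory" after the recreate
```

This is why routine image updates double as cleanup here, whereas a host /tmp would keep the attacker's files across package upgrades.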