On the Corporate DDoS Attack

I usually deign not to comment on that which I see appearing too often in the computer commentators' collective consciousness; however, I wish to write more and find the topic to be interesting enough. I've long wondered how certain websites keep themselves afloat, above the constant waves of attacks, beyond those which simply overwhelm attackers with sickening amounts of wasted resources; amusingly, the same corporations that defend themselves through such waste now perpetrate these attacks against the smaller websites.

Since Bitcoin can't be taken down without global coordination, it's common to see people repeating their programming about ``climate change'' and other such lies as one reason to destroy Bitcoin through social means; naturally, because corporations believe themselves most likely to benefit from copyright laundering through neural network nonsense, such arguments magically cease to apply when energy is used towards that effort. A future freed from control by incestuous bankers is supposedly bad, and a future freed from copyright, except for corporations, is apparently both fine and dandy.

Regardless, it's amusing to me as I watch ``webmasters'' bemoan their websites' inability to deal with an onslaught of worthless requests from an army of machines; I'd truly figured that, surely, they'd already solved such problems, and that the occasional fool bemoaning a ``DDoS attack'' of not even one hundred requests per second was some rarity. I know now this situation to be much worse.

My website certainly could be optimized further; I'd like to write a custom HTTP server, but turning a program inside out to suit the stupid TCP mechanisms required for decent performance is a horrible thought. I currently use the Apache2 server which, as I've customized it, parses and interprets its configuration at every request. What matters most of all is the small amount of work needed to serve any particular request. My website dynamically generates nothing except headers and uses no fake and expensive encryption, and most of its pages reference no other URLs needed to display them. My website could in all likelihood be taken down trivially by an attack exploiting a flaw in TCP, but I've seen none of the conversation focus on such flaws; instead, the entire conversation concerns merely a large number of requests, mostly with regard to how they interact badly with expensive and dynamic pages. It's deeply amusing to see how many websites couldn't handle the traffic actually sent to them.

As in everything to do with the WWW, the current plan for many is to make it even worse by requiring JavaScript to run proof-of-work programs before access to pages be granted. It would be too nice if a simpler mechanism were used instead, although HTTP isn't expandable in any useful way anyway. Massively complicated WWW browsers could've gained such basic abilities years ago, but their masters benefit from their absence, and some businesses depend entirely on preventing such simple solutions. Proof-of-work is, regardless, a bandage over a gaping wound in this scenario, especially considering the willingness of corporations to waste resources whenever someone else pays for them; the one true solution is making the pages less expensive to send, which will likely result in cancelled services.

It's rather interesting to see this happen to the WWW, since I mistook all of this for common sense.
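As an aside on the per-request configuration parsing mentioned above: the excerpt below is a minimal sketch of the well-known Apache behaviour I take that remark to describe, in which any AllowOverride setting other than None makes the server search for and interpret per-directory .htaccess files on every request rather than reading its configuration once at startup; the directory path here is merely an illustrative assumption.

    # Hypothetical httpd.conf excerpt; the path is an example only.
    # With AllowOverride enabled, Apache looks for and interprets
    # .htaccess files along the directory tree on every request,
    # instead of reading its configuration solely at startup.
    <Directory "/var/www/pages">
        AllowOverride All
    </Directory>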
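And, to make the proof-of-work gating described above concrete: the requester must burn CPU time finding a nonce whose hash meets a difficulty target before the server will answer, while the server verifies the claim with a single cheap hash. The particular scheme in this sketch, SHA-256 with a leading-zero-bit target, and every name in it are my own assumptions for illustration, not the workings of any specific service.

    # A minimal sketch of hash-based proof-of-work gating; the scheme
    # (SHA-256, leading zero bits) is an assumption for illustration.
    import hashlib
    import os

    DIFFICULTY_BITS = 18  # an arbitrary example difficulty


    def leading_zero_bits(digest: bytes) -> int:
        """Count the leading zero bits of a digest."""
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
                continue
            # Zero bits at the top of the first nonzero byte.
            bits += 8 - byte.bit_length()
            break
        return bits


    def issue_challenge() -> bytes:
        """The server hands out a random challenge with the difficulty."""
        return os.urandom(16)


    def solve(challenge: bytes) -> int:
        """The requester burns CPU time searching for a satisfying nonce."""
        nonce = 0
        while True:
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if leading_zero_bits(digest) >= DIFFICULTY_BITS:
                return nonce
            nonce += 1


    def verify(challenge: bytes, nonce: int) -> bool:
        """The server checks the claimed nonce with a single cheap hash."""
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return leading_zero_bits(digest) >= DIFFICULTY_BITS


    if __name__ == "__main__":
        challenge = issue_challenge()
        nonce = solve(challenge)         # expensive for the requester
        assert verify(challenge, nonce)  # cheap for the server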