        _______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
  HTML Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
  HTML   Show HN: OpenWorkers – Self-hosted Cloudflare workers in Rust
       
       
        rcarmo wrote 11 hours 2 min ago:
        Nice. An obvious link to the compose file would be great (I assume you
        have prebuilt images on ghcr.io?), and if it happens to work on ARM, I
        will certainly give it a try.
       
        brainless wrote 14 hours 28 min ago:
         I am always interested in seeing alternatives in the edge compute
         space, but self-hosting does not make sense to me.
        
        The benefit of edge is the availability close to customers. Unless I
        run many servers, it is simply easier to run one server instead of
        "edge".
       
        willtemperley wrote 15 hours 3 min ago:
        I've decided to ditch CF because Wrangler is deployed via NPM and I
        cannot bear NodeJS and Microsoft NPM anymore.
        
        I get the impression this can't be run without NodeJS right now?
       
          max_lt wrote 13 hours 15 min ago:
           Same conclusion here. No Node required — the runtime is pure
           Rust + V8. The only transformation we do is transpiling TS code.
       
        mariopt wrote 15 hours 45 min ago:
        Amazing work!
        
        I have been thinking exactly about this. CF Workers are nice but the
        vendor lock-in is a massive issue mid to long term.
         Bringing D1 makes a lot of sense for web apps via libSQL (SQLite
         with read/write replicas).
         
         Do you intend to support the current wrangler file format?
         Does this currently work with Hono.js and the Cloudflare
         connector?
       
          max_lt wrote 9 hours 33 min ago:
           Wrangler file format: not planned. We're taking a different
           approach for config, but we intend to be compatible with
           Cloudflare adapters (SvelteKit, Astro, etc). The assets binding
           already has the same API. We just need to support _routes.json
           and add static file routing on top of workers; the data model
           is ready for it.
          
          For D1: our DB binding is Postgres-based, so the API differs
          slightly. Same idea, different backend.
          
           Hono should just work; it just needs a manual build step and
           copy-paste for now. We will soon host the OpenWorkers dashboard
           and API (Hono) directly on the runner (just some plumbing to do
           at this point).
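           
           To make the adapter-compat point concrete, here is a minimal
           sketch of a worker in the standard module format that should
           run unchanged on both runtimes (the Env shape here is
           illustrative, not our documented API):
           
           // worker.ts - standard Workers module syntax
           interface Env {
             // assumed shape; mirrors Cloudflare's assets binding
             ASSETS: { fetch(req: Request): Promise<Response> };
           }
           
           export default {
             async fetch(req: Request, env: Env): Promise<Response> {
               const url = new URL(req.url);
               // serve static files via the assets binding,
               // otherwise fall through to worker code
               if (url.pathname.startsWith("/static/")) {
                 return env.ASSETS.fetch(req);
               }
               return new Response("Hello from a portable worker");
             },
           };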
       
            mariopt wrote 5 hours 18 min ago:
             I think it would be worth keeping D1 compatibility; SQLite
             and Postgres have different SQL dialects. Cloudflare has
             Hyperdrive to keep connections alive to Postgres/other DBs.
             What D1/libSQL/Turso brings to the table is the ability to
             run a read/write replica on the machine, which can
             dramatically reduce latency.
       
        IntelliAvatar wrote 18 hours 9 min ago:
        Nice project.
        
        One thing Cloudflare Workers gets right is strong execution isolation.
        When self-hosting, what’s the failure model if user code misbehaves?
        Is there any runtime-level guardrail or tracing for side-effects?
        
        Asking because execution is usually where things go sideways.
       
          max_lt wrote 13 hours 5 min ago:
           Workers that hit limits (CPU, memory, wall-clock) get terminated
           cleanly with a clear reason. Exceptions are caught with stack
           traces (at least they should be, lol), and logs stream in real
           time.
          
          What's next: execution recording. Every invocation captures a trace:
          request, binding calls, timing. Replay locally or hand it to an AI
          debugger. No more "works on my machine".
          
          I think the CLI will look like:
          
          # Replay a recorded execution:
          
          openworkers replay --execution-id abc123
          
          # Replay with updated code, compare behavior:
          
          openworkers replay --execution-id abc123 --worker ./dist/my-fix.js
          
          Production bug -> replay -> AI fix -> verified -> deployed. That's
          what I have in mind.
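           
           Rough shape of a recorded trace as I currently imagine it
           (nothing final, all field names are placeholders):
           
           interface ExecutionTrace {
             executionId: string;
             // the inbound request that triggered the worker
             request: { method: string; url: string;
                        headers: Record<string, string> };
             // every binding call (KV get, fetch, DB query) with timing
             bindingCalls: { binding: string; op: string;
                             durationMs: number }[];
             timing: { startedAt: string; cpuMs: number; wallMs: number };
             result: { status: number } | { error: string };
           }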
       
            IntelliAvatar wrote 9 hours 46 min ago:
            This makes a lot of sense. Recording execution + replay is exactly
            what’s missing once you move past simple logging.
            
            One thing I’ve found tricky in similar setups is making sure the
            trace is captured before side-effects happen, otherwise replay can
            lie to you. If you get that boundary right, the prod → replay →
            fix → verify loop becomes much more reliable.
            
            Really like the direction.
       
        valdair3d wrote 18 hours 39 min ago:
        Self-hosted workers are becoming critical infrastructure for AI agent
        workloads. When you're running agents that need to interact with
        external services - web scraping, API orchestration, browser automation
        - you hit Cloudflare's execution limits fast. The 30s CPU time on the
        free tier and even the 15min on paid plans don't work for long-running
        agent tasks.
        
        The isolation model here is interesting. For agents that need to handle
        untrusted input (processing user URLs, parsing arbitrary documents), V8
        isolates give you a security boundary that's much lighter than full
        container isolation. But you trade off the ability to do things like
        spawn subprocesses or access the filesystem.
        
        Curious about the persistence story. Most agent workflows need some
        form of state between invocations - conversation history, task
        progress, cached auth tokens. Is there a built-in KV store or does this
        expect external storage?
       
          max_lt wrote 13 hours 12 min ago:
          Good use case. For state between invocations, we have KV (key-value
          with TTL), Storage (S3) and DB bindings (Postgres). Durable Objects
          not yet but it's on the roadmap.
          
          Wall-clock timeout is configurable (default 30s), CPU limits too. We
          haven't prioritized long-running tasks or WebSockets yet, but
          shouldn't be hard to add.
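           
           A sketch of the cached-token case from worker code, assuming a
           Workers-KV-style binding API (exact method shapes may differ):
           
           interface KV {
             get(key: string): Promise<string | null>;
             put(key: string, value: string,
                 opts?: { expirationTtl?: number }): Promise<void>;
           }
           
           // hypothetical stand-in for a call to a real token endpoint
           async function fetchFreshToken(): Promise<string> {
             return "demo-token";
           }
           
           export default {
             async fetch(req: Request,
                         env: { KV: KV }): Promise<Response> {
               let token = await env.KV.get("auth-token");
               if (!token) {
                 token = await fetchFreshToken();
                 // keep it for an hour between invocations
                 await env.KV.put("auth-token", token,
                                  { expirationTtl: 3600 });
               }
               return new Response("token cached");
             },
           };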
       
        keepamovin wrote 22 hours 20 min ago:
         Technically and architecturally, this is excellent. It’s also an
        excellent product idea. And I’m particularly a fan of the
        big-ass-vendor-inversion-model where instead of the big ass vendor
        ripping off an open source project and monetizing it, you look at one
        of their projects and you rip it off inversely and open source it —
        this is the way.
       
        mmastrac wrote 23 hours 1 min ago:
        I did a huge chunk of work to split deno_core from deno a few years
         back and TBH I don't blame you for moving to raw rusty_v8. There was a
        _lot_ of legacy code in deno_core that was challenging to remove
        because touching a lot of the code would break random downstream tests
        in deno constantly.
       
          max_lt wrote 22 hours 37 min ago:
          Thanks for that work! deno_core is a beautiful piece of work and is
          still an option for OpenWorkers: [1] We maintained it until we
          introduced bindings — at that point, we wanted more fine-grained
          control over the runtime internals, so we moved to raw rusty_v8 to
          iterate faster. We'll probably circle back and add the missing pieces
          to the deno runtime at some point.
          
  HTML    [1]: https://github.com/openworkers/openworkers-runtime-deno
       
        TZubiri wrote 23 hours 28 min ago:
        
        
  HTML  [1]: https://imgflip.com/i/agah04
       
          TZubiri wrote 23 hours 27 min ago:
          
          
  HTML    [1]: https://imgflip.com/i/agah2y
       
        utopiah wrote 1 day ago:
        DX?
        
        I'm quite ignorant on the topic (as I never saw the appeal of
        Cloudflare workers, not due to technical problems but solely because of
        centralization) but what does DX in "goal has always been the same: run
        JavaScript on your own servers, with the same DX as Cloudflare Workers
        but without vendor lock-in." mean? Looks like a runtime or environment
        but looking at [1] I also don't see it.
        
        Anyway if the "DX" is a kind of runtime, in which actual contexts is it
        better than the incumbents, e.g. Node, or the newer ones e.g. Deno or
        Zig or even more broadly WASI?
        
  HTML  [1]: https://github.com/drzo/workerd
       
          locknitpicker wrote 1 day ago:
          > Anyway if the "DX" is a kind of runtime, in which actual contexts
          is it better than the incumbents, e.g. Node, or the newer ones e.g.
          Deno or Zig or even more broadly WASI?
          
          I'm not the blogger, I'm just a developer who works professionally
          with Cloudflare Workers. To me the main value proposition is avoiding
          vendor lock-in, and even so the logic doesn't seem to be there.
          
          The main value proposition of Cloudflare Workers is being able to
          deploy workers at the edge and use them to implement edge use cases.
          Meaning, custom cache logic, perhaps some pauthorization work,
          request transformation and aggregation, etc. If you remove the global
          edge network and cache, you do not have any compelling reason to look
          at this.
          
          It's also perplexing how the sales pitch is Rust+WASM. This
          completely defeats the whole purpose of Cloudflare Workers. The whole
          point of using workers is to have very fast isolates handling
          IO-heavy workloads where they stay idling the majority of the time so
          that the same isolate instance can handle a high volume of requests.
           WASM is notorious for eliminating the ability to yield on awaits
           from fetch calls, and is only compelling if your argument is a
           lift-and-shift use case. Which this ain't.
       
            utopiah wrote 12 hours 26 min ago:
            Neat, thanks for taking the time to explain!
            
            Indeed "global edge network and cache" is not my interest. I do
            prototyping so I don't care about scale or super low latency
            Worldwide. If I want to share a prototype I put on my small server
            in Germany, share the URL and voila, good enough.
            
            That being said I understand others do, so this helps me understand
            the broader appeal of cloud workers and now some limits of this
            project.
       
          lukevp wrote 1 day ago:
          DX means Developer Experience, they're saying it lets you use the
          same tooling and commands to build the workers as you would if they
          were on CloudFlare.
       
            utopiah wrote 12 hours 30 min ago:
            Thanks for the clarification!
            
             Damn, I had read "DevEx" before but never "DX" until today.
             Damn, I'm outdated!
            
            Anyway, back to vim ;)
       
        abalashov wrote 1 day ago:
        What if we hosted the cloud... on our own computers?
        
        I see we have entered that phase in the ebb and flow of cloud vs.
        self-hosting. I'm seeing lots of echoes of this everywhere, epitomised
        by talks like this:
        
  HTML  [1]: https://youtu.be/tWz4Eqh9USc
       
          nine_k wrote 1 day ago:
          It won't be a... cloud?
          
          To me, the principal differentiator is the elasticity. I start and
          retire instances according to my needs, and only pay for the
          resources I've actually consumed. This is only possible on a very
          large shared pool of resources, where spikes of use even out somehow.
          
          If I host everything myself, the cloud-like deployment tools simplify
          my life, but I still pay the full price for my rented / colocated
          server. This makes sense when my load is reasonably even and
          predictable. This also makes sense when it's my home NAS or media
          server anyway.
          
          (It is similar to using a bus vs owning a van.)
       
            rcarmo wrote 11 hours 0 min ago:
            It will be a very small cloud.
       
          locknitpicker wrote 1 day ago:
          > What if we hosted the cloud... on our own computers?
          
          The value proposition of function-as-a-service offerings is not
          "cloud" buzzwords, but providing an event-handling framework where
          developers can focus on implementing event handlers that are
          triggered by specific events.
          
           FaaS frameworks are the high-level counterpart of low-level
           message brokers + web services/background tasks.
          
          Once you include queues in the list of primitives, durable executions
          are another step in that direction.
          
           If you have any experience developing and maintaining web
           services, you'll understand that API work largely consists of
           writing boilerplate code, controller actions, and background
           tasks. FaaS frameworks abstract away the boilerplate work.
       
        theknarf wrote 1 day ago:
        Why would I want this over just sticking Node / Deno / Bun in a Docker
        container?
       
          m11a wrote 1 day ago:
          Node in Docker doesn’t have full isolation and ‘sandbox’
          escapes are possible. V8 is comparatively quite hardened
       
        victorbjorklund wrote 1 day ago:
         Cool. I always liked CF workers but haven’t shipped anything
         serious with them due to not wanting vendor lock-in. This is
         perfect for knowing you’ve always got an escape hatch.
       
        nextaccountic wrote 1 day ago:
        Any reason to abandon Deno?
        
        edit: if the idea was to have compatibility with cloudflare workers,
        workers can run deno
        
  HTML  [1]: https://docs.deno.com/examples/cloudflare_workers_tutorial/
       
          max_lt wrote 1 day ago:
          Deno core is great and I didn't really abandon Deno – we support 5
          runtimes actually, and Deno is the second most advanced one ( [1] ).
          It broke a few weeks ago when I added the new bindings system and I
          haven't had time to fix it yet. Focused on shipping bindings fast
          with the V8 runtime. Will get back to Deno support soon.
          
  HTML    [1]: https://github.com/openworkers/openworkers-runtime-deno
       
        orliesaurus wrote 1 day ago:
        Good to see this! Cloudflare's cool, but those locked-in things (KV,
        D1, etc.) always made it hard to switch. 
         Offering open-source alternatives is always good, but maintaining
        is on the community. Even without super-secure multi-tenancy, being
        able to run the same code on your own stuff or a small VPS without
        changing the storage is a huge dev experience boost.
       
        bob1029 wrote 1 day ago:
        > It brings the power of edge computing to your own infrastructure.
        
        I like the idea of self-hosting, but it seems fairly strongly opposed
        to the concept of edge computing. The edge is only made possible by big
        ass vendors like Cloudflare. Your own infrastructure is very unlikely
        to have 300+ points of presence on the global web. You can replicate
        this with a heterogeneous fleet of smaller and more "ethical" vendors,
        but also with a lot more effort and downside risk.
       
          closingreunion wrote 23 hours 53 min ago:
          Is some sort of decentralised network of hosts somehow working
          together to challenge the Cloudflare hegemony even plausible? Would
          it be too difficult to coordinate in a safe and reliable way?
       
            geysersam wrote 23 hours 24 min ago:
            If you have a central database, what benefits are you getting from
            edge compute? This is a serious question. As far as I understand
            edge computing is good for reducing latency. If you have to
            communicate with a non-edge database anyway, is there any advantage
            from being on the edge?
       
              csomar wrote 14 hours 11 min ago:
              Databases in Cloudflare are not edge. That is, they are tied to a
              central location. Where workers help is async stateless tasks.
              There are a lot of these (authentication, email, notifications,
              etc.)
       
                h33t-l4x0r wrote 10 hours 29 min ago:
                It has edge replicas though. You're talking about d1, right?
       
              martinald wrote 23 hours 19 min ago:
              Well you can cache stuff and also use read replicas. But yes, you
              are correct. For 'write' it doesn't help as much to say the
              least. But for some (most?) sites they are 99.9% read...
       
          patmorgan23 wrote 1 day ago:
           But do you need 300 PoPs to benefit from the edge model? Or
           would 10 PoPs in your primary territory be enough?
       
            locknitpicker wrote 1 day ago:
             > But do you need 300 PoPs to benefit from the edge model? Or
             would 10 PoPs in your primary territory be enough?
            
             I don't think that the number of PoPs is the key factor. The
             key factor is being able to route requests based on
             edge-friendly criteria (latency, geographical proximity, etc)
             and automatically deploy changes in a way that ensures
             consistency.
             
             This sort of project does not and cannot address those
             concerns.
            
            Targeting the SDK and interface is a good hackathon exercise, but
            unless you want to put together a toy runtime to do some local
            testing, this sort of project completely misses the whole reason
            why this sort of technology is used.
       
            nrhrjrjrjtntbt wrote 1 day ago:
             For most applications 1 location is probably good enough. I
             assume HN is single-location and I am a long way from CA but
             have no speed issues.
             
             Caveat for high-scale sites and game servers. Maybe for
             image-heavy sites too (but self-hosting then adding a CDN
             seems like a low lock-in and low-cost option).
       
              locknitpicker wrote 1 day ago:
              > For most applications 1 location is probably good enough.
              
                 If your use case doesn't require redundancy or high-availability,
              why would you be using something like Cloudflare to start with?
       
                nrhrjrjrjtntbt wrote 12 hours 19 min ago:
                 It takes a minute to set up for the CDN use case.
       
                max_lt wrote 13 hours 25 min ago:
                The DX is great: simple deployment, no containers, no infra to
                manage. I build a lot of small weekend projects that I don't
                want to maintain once shipped. OpenWorkers gives you the same
                model when you need compliance or data residency.
       
                Hamuko wrote 13 hours 43 min ago:
                Cloudflare gives me free resources. If they tomorrow reduced my
                blog to be available on a single region only, I'd shrug and
                move on with my day.
       
                NicoJuicy wrote 20 hours 45 min ago:
                Price
       
                gpm wrote 21 hours 11 min ago:
                Free bandwidth. (Also the very good sibling-answer about
                tunnels).
       
                RandomDistort wrote 23 hours 8 min ago:
                 When you have a simple tool you have written for yourself,
                 which you need to be reliable and accessible, but which
                 you don't use frequently enough to be worth the bother of
                 running on your own server with all of that setup and
                 ongoing maintenance.
       
                robertcope wrote 23 hours 52 min ago:
                Security. I host personal sites on Linodes and other external
                servers. There are no inbound ports open to the world.
                Everything is accessed via Cloudflare Tunnels and locked down
                via their Zero Trust services. I find this useful and good, as
                I don't really want to have to develop my personal services to
                the point where I'd consider them hardened for public internet
                access.
       
                  h33t-l4x0r wrote 10 hours 36 min ago:
                  Not even ssh? What happens if cloudflare goes down?
       
                    nwellinghoff wrote 4 hours 37 min ago:
                    You could restrict the ssh port by ip as well.
       
                    c0balt wrote 7 hours 6 min ago:
                    Not oc, but services like Linode often offer "console"
                    access via a virtualized tty for VPS systems.
                    
                    Having a local backup user is a viable backup path then. If
                    you wire up pam enough you can even use MFA for local
                    login.
       
            andrewaylett wrote 1 day ago:
             Honestly, for my own stuff I only need one PoP to be close to
             my users. And I've avoided using Cloudflare because they're
             too far away.
             
             More seriously, I think there's a distinction between
             "edge-style" and actual edge that's important here. Most of
             the services I've been involved in wouldn't benefit from any
             kind of edge placement: that's not the lowest-hanging fruit
             for performance improvements. But that doesn't mean that the
             "workers" model wouldn't fit, and indeed I suspect that using
             a workers model would help folk architect their stuff in a
             form that is not only more performant, but also more amenable
             to edge placement.
       
            st3fan wrote 1 day ago:
            many apps are fine on a single server
       
            trevor-e wrote 1 day ago:
            I agree, latency is very important and 300 pops is great, but seems
            more for marketing and would see diminishing returns for the
            majority of applications.
       
        j1elo wrote 1 day ago:
        To the author: The ASCII-art Architecture diagram is very broken, at
        least on my Pixel phone with Firefox.
        
         These kinds of text-based diagrams are appealing to us techies,
         but in the end I learned that they are less practical. My
         suggestion is to use an image, and to think of the text-based
         version as the "source code" which you keep, meanwhile what gets
         published is the output of "compiling" it into something that is
         always viewable without mistakes (that is where we tend to miss
         with ascii-art).
       
          max_lt wrote 1 day ago:
          Thanks for the heads up! Fixed – added a simplified ASCII version
          for mobile.
       
            j1elo wrote 1 day ago:
            Thanks! Now I can make more sense of it! Very cool project by the
            way, thanks for posting it
       
          vishnugupta wrote 1 day ago:
          Rendered perfectly on my iPhone 11 Safari.
       
            simlevesque wrote 1 day ago:
            That's why we need to test websites on multiple browsers.
       
        byyll wrote 1 day ago:
        Isn't the whole point of Cloudflare's Workers to pay per function? If
        it is self-hosted, you must dedicate hardware in advance, even if it's
        rented in the cloud.
       
          shimman wrote 1 day ago:
           Many companies that run self-hosted servers in data centers
           still need to run software on top of them. Not every company
           needs to pay people to do things they are capable of doing
           themselves.
           
           Having options that mimic paid services is a good thing and
           helps with adoption.
       
        buremba wrote 1 day ago:
         I wonder why V8 is considered superior to WASM for sandboxing.
       
          m11a wrote 1 day ago:
           Is WASM’s story for side effects solved yet? E.g. network calls
           seem too complicated ( [1] etc)
          
  HTML    [1]: https://github.com/vasilev/HTTP-request-from-inside-WASM
       
          skybrian wrote 1 day ago:
          On V8, you can run both JavaScript and WASM.
       
            buremba wrote 1 day ago:
             Theoretically yes, but neither CF Workers nor this project
             supports it. Indeed, none of the cloud providers offer
             first-party WASM support yet.
       
              max_lt wrote 20 hours 27 min ago:
               CF Workers does support WASM. We do too, as V8 handles it
               natively. Tested it, works, just hasn't been polished yet.
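               
               For reference, since V8 exposes the standard WebAssembly
               API, calling into a module from worker JS is roughly this
               (a sketch: how the .wasm file gets bundled and uploaded is
               exactly the unpolished part):
               
               // the module bytes; assumed to be provided at deploy time
               declare const wasmBytes: ArrayBuffer;
               
               // instantiate a module exporting add(a, b), then call it
               const { instance } = await WebAssembly.instantiate(wasmBytes);
               const add = instance.exports.add as
                 (a: number, b: number) => number;
               console.log(add(2, 3)); // 5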
       
              justincormack wrote 1 day ago:
              Workers does support wasm
              
  HTML        [1]: https://developers.cloudflare.com/workers/runtime-apis/w...
       
                buremba wrote 1 day ago:
                Maybe it's better now but I wouldn't call this first-class
                support, as you rely on the JS runtime to initialize WASM.
                
                The last time I tried it, the cold start was over 10 seconds,
                making it unusable for any practical use case. Maybe the tech
                is not there but given that WASM guarantees the sandboxing
                already and supports multiple languages, I was hoping we would
                have providers investing in it.
       
              otterley wrote 1 day ago:
              The problem is that there’s not much of a market opportunity
              yet. Customers aren’t voting for WASM with their wallets like
               they are for mainstream language runtimes.
       
        tbrockman wrote 1 day ago:
        Cool project, great work!
        
        Forgive the uninformed questions, but given that `workerd` ( [1] ) is
        "open-source" (in terms of the runtime itself, less so the deployment
        model), is the main distinction here that OpenWorkers provides a
        complete environment? Any notable differences between the respective
        runtimes themselves? Is the intention to ever provide a managed
        offering for scalability/enterprise features, or primarily focus on
        enabling self-hosting for DIYers?
        
  HTML  [1]: https://github.com/cloudflare/workerd
       
          max_lt wrote 1 day ago:
          Thanks! Main differences:
            1. Complete stack: workerd is just the runtime. OpenWorkers
          includes the full platform – dashboard, API, scheduler, logs, and
          self-hostable bindings (KV, S3/R2, Postgres).
            2. Runtime: workerd uses Cloudflare's C++ codebase, OpenWorkers is
          Rust + rusty_v8. Simpler, easier to hack on.
            3. Managed offering: Yes, there's already one at
          dash.openworkers.com – free tier available. But self-hosting is a
          first-class citizen.
       
            csomar wrote 11 hours 25 min ago:
             Question: do you support WASM workers? How does the deployment
             experience compare to Wrangler? If I have a WASM worker and
             only use KV, how close will the deployed worker be to one on
             Cloudflare?
       
              max_lt wrote 10 hours 28 min ago:
              WASM is supported, V8 handles it natively. Tested it briefly,
              works, but not user-friendly at all yet.
              
              OpenWorkers CLI is in development. We're at the pre-wrangler
              stage honestly. Dashboard or API for now, wrangler-style DX with
               GitHub/GitLab integration is the goal.
       
        dangoodmanUT wrote 1 day ago:
         This is similar to what Rivet does ( [1] ), perhaps focusing more
         on stateless workloads than Rivet does.
        
  HTML  [1]: https://www.rivet.dev/docs/actors/
       
        kachapopopow wrote 1 day ago:
        Could you add a kubernetes deployment quick-start? Just a simple
        deployment.yaml is enough.
       
        mohsen1 wrote 1 day ago:
        This is super nice! Thank you for working on this!
        
        Recently really enjoying CloudFlare Workflows (used it in [1] ) and
        would be nice to build Workflows on top of this too.
        
  HTML  [1]: https://mafia-arena.com
       
          max_lt wrote 1 day ago:
          Thanks! Workflows is definitely interesting – it's basically
          durable execution with steps and retries. It's on the radar, probably
          after the CLI and GitHub integration.
       
        strangescript wrote 1 day ago:
         Cool project, but I never found the Cloudflare DX desirable
         compared to self-hosted alternatives. A plain old Node server in a
         Docker container was much easier to manage and use, and it scales.
         Cloudflare's system was just a hoop you needed to jump through to
         get to the other nice-to-haves in their cloud.
       
          skybrian wrote 1 day ago:
          Would it be useful for testing apps that you're going to deploy on
          Cloudflare anyway?
       
        st3fan wrote 1 day ago:
        This is very nice! Do you plan to hook this up to GitHub, so that a
        push of worker code (and maybe a yaml describing the environment &
        resources) will result in a redeploy?
       
          max_lt wrote 1 day ago:
          Not yet, but it's one of the next big features. I'm currently working
          on the CLI (WIP), and GitHub integration with auto-deploy on push
          will come after that. A yaml config for bindings/cron is definitely
          on the roadmap too.
       
            max_lt wrote 1 day ago:
            I'm also working on execution recording/replay – the idea is to
            capture a deterministic trace of a request, so you can push it as a
            GitHub issue and replay it locally (or let an AI debug it).
       
        kristianpaul wrote 1 day ago:
         Interesting option to consider next to OpenFaaS.
       
        kachapopopow wrote 1 day ago:
         I upvote anything that reduces reliance on vendor lock-in.
         Hopefully cloud services see a mass exodus so they have to have
         reasonable pricing that actually reflects their costs instead of
         charging more than free for basic services like NAT.
        
        Cloud services are actually really nice and convenient if you were to
        ignore the eye watering cost versus DIY.
       
          re-thc wrote 1 day ago:
          > so they have to have reasonable pricing that actually reflects
          their costs instead of charging more than free for basic services
          like NAT
          
          How is the cost of NAT free?
          
          > Cloud services are actually really nice and convenient if you were
          to ignore the eye watering cost versus DIY.
          
          I don't doubt clouds are expensive, but in many countries it'd cost
          more to DIY for a proper business. Running a service isn't just
          running the install command. Having a team to maintain and monitor
          services is already expensive.
       
            nijave wrote 17 hours 22 min ago:
            Presumably they're talking about the egregious price of NAT on AWS.
            
             It's next to free to self-host, considering even the crappiest
             consumer router has hardware-accelerated NAT and draws a tiny
             amount of power. You likely already have the hardware and
             power, since you need routing and potentially other network
             services anyway.
       
              re-thc wrote 7 hours 0 min ago:
              > Presumably they're talking about the egregious price of NAT on
              AWS.
              
              Maybe. I agree AWS is over-priced. However it shouldn't be
              "free".
              
               > It's next to free to self-host, considering even the
               crappiest consumer router
               
               That's not the same product / service, is it? We're
               discussing networking products, and this "crappiest"
               consumer router wouldn't even push 100M packets in the real
               world.
       
            kachapopopow wrote 1 day ago:
             Salesforce had their hosting bill jump orders of magnitude
             after ditching their colocation; it did not save anything, and
             colocation staff were replaced with AWS engineers.
             
             NAT is free to provide because the infrastructure for NAT is
             already there and nothing ever maxes out a switch cluster
             (most switches sit at ~1% usage since they're overspecced
             $1,000,000 switches), so the only real cost is host CPU time
             managing interrupts (which is unlikely since all network cards
             offload this).
             
             Sure, you could argue that regional NAT should perhaps be
             priced, but these companies have so much fiber between their
             datacenters that all of their NAT usage is probably a rounding
             error.
       
              pyvpx wrote 15 hours 22 min ago:
              NAT is a stateful network function and incredibly complex to
              implement efficiently. NAT is never free.
       
                kachapopopow wrote 8 hours 5 min ago:
                 it's already there and fully supported and accelerated by
                 switches and connected hardware. Switches like Juniper's
                 do have licensing fees to use such features, but a company
                 like AWS can surely work around those licensing costs and
                 build an in-house solution.
       
                  re-thc wrote 6 hours 58 min ago:
                  > it's already there
                  
                  So it should be free? The bank already has "money". It's
                  already there so you can take it?
                  
                  That's not how it works.
                  
                  Do you not get a managed service where someone upgrades it,
                  deals with outages etc? Are those people that work 24/7 free
                  or is it another "already there"?
       
                    kachapopopow wrote 5 hours 45 min ago:
                     Fair point, but the pricing of NAT is so low that it
                     would actually take more effort to build billing for
                     it than to just have it be free. It's clearly a choice
                     to maximize profits on every single resource
                     regardless of complexity or cost - that is my problem.
                     
                     And there are things that come for free when you have
                     infrastructure this big and expansive - one-time
                     configuration - and you either monetize it or pass
                     down the savings. Since every cloud service is in
                     agreement that profits should be maximized, you end up
                     with cloud providers that have massive datacenters at
                     very cheap cost (economies of scale) selling them at
                     margins far exceeding normal hosting practices, thanks
                     to their ability to monopolize and spend vast amounts
                     of money onboarding businesses with false promises.
                     That erodes the infrastructure for non-cloud solutions
                     and makes cloud providers the only choice for any
                     business, as the talent and software end up going into
                     maintenance mode and/or turning toward higher
                     profitability to stay afloat.
       
            otterley wrote 1 day ago:
            They said “charging more than free” - i.e., more than $0, i.e.,
            they’re not free. It was awkwardly worded.
       
              re-thc wrote 1 day ago:
              They said "instead of charging more than free", which means
              should be free.
              
              Please read it again.
       
                otterley wrote 1 day ago:
                I think we’re in violent agreement, but you were ambiguous
                about what “cost” meant. It seems you meant “cost of
                providing NAT” but I interpreted it as “cost to the
                customer.”
                
                > Please read it again.
                
                There’s no need to be rude.
       
          rozenmd wrote 1 day ago:
          Probably worth pointing out that the Cloudflare Workers runtime is
          already open source:
          
  HTML    [1]: https://github.com/cloudflare/workerd
       
            max_lt wrote 1 day ago:
            True, workerd is open source. But the bindings (KV, R2, D1, Queues,
            etc.) aren't – they're Cloudflare's proprietary services.
            OpenWorkers includes open source bindings you can self-host.
       
              buremba wrote 1 day ago:
                 I tried to run it locally some time ago, but it's buggy as
                 hell when self-hosted. It's not even worth trying, given
                 that CF itself doesn't recommend it.
       
                ketanhwr wrote 1 day ago:
                I'm curious what bugs you encountered. workerd does power the
                local runtime when you test CF workers in dev via wrangler, so
                we don't really expect/want it to be buggy..
       
          geek_at wrote 1 day ago:
           I'm worried that increasing RAM prices will drive more people
           away from local hosting and toward cloud services: if the big
           companies are buying up all the resources, it might not be
           feasible to self-host in a few years.
       
            kachapopopow wrote 1 day ago:
             The pricing is so insane it will always be cheaper to
             self-host by 100x; that's how bad it is.
       
              dijit wrote 1 day ago:
              not 100x.
              
               10% is the number I ordinarily see, accounting for members
               of staff and adequate DR systems.
              
              If we had paid our IT teams half of what we pay a cloud provider,
              we would have had better internal processes.
              
              Instead we starved them and the cloud providers successfully
              weaponised extremely short term thinking against us, now barely
              anyone has the competence to actually manifest those cost
              benefits without serious instability.
       
                kachapopopow wrote 1 day ago:
                 I genuinely mean that. fly.io (as unreliable as it might
                 seem) is already around ~5x to 10x cheaper depending on
                 use case, and for some services it's actually infinitely
                 cheaper, because it's completely free when you self-host!
                 
                 GCP pricing is absolutely wicked: they charge $120/month
                 for 4 vcores and 16GB RAM, while you can get around 23
                 times more performance and 192GB RAM for $350/month with
                 Xtbps-ish DDoS protection.
                 
                 I have 2 dual-7742 machines with 1TB RAM each, 3 9950X3Ds
                 with 192GB ECC, and 2 7950X3Ds, all at <$600/month.
                 Obviously the original hardware cost was in the realm of
                 $60k; the Epyc CPUs were bought used for around $1k each,
                 so not a bad deal, same with the RAM - overall the true
                 cost is <$20k. This is entirely for personal use and will
                 most likely last me more than a decade, unless there are
                 major gains in efficiency and power costs continue to grow
                 due to AI demand. This also includes 100TB+ of HDD storage
                 and 40TB of NVMe storage, all connected with a pair of
                 100Gbps switches for redundancy, at a cheap cheap price of
                 $500 per switch.
                
                I guess I owe some links: (Ignore minecraft focused branding)
                [1] (also offers colocation)
                
                telegram: @Erikb_9gigsofram direct colocation at datacenter (no
                middlemen / sales) + good low cost bundle deal
                
                 anti-ddos: [2] (might still offer colocation?)
                 
                 anti-ddos: [3]
                
  HTML          [1]: https://pufferfish.host/
  HTML          [2]: https://cosmicguard.com/
  HTML          [3]: https://tcpshield.com/
       
              Imustaskforhelp wrote 1 day ago:
               Wait, what? Can you show me some sources to back this up? I
               assume you are exaggerating, but still, it would be
               interesting to know what counts as cheap.
               
               I don't think, after RAM prices spiked 4-5x, that it's going
               to be cheaper to self-host by 100x. Hetzner's or OVH's cloud
               offerings are cheap.
               
               Plus you have to put down a lot of money up front, and then
               still pay for something like colocation if you are competing
               with them.
               
               Even if you aren't, I think the models are different. Cloud
               is a monthly subscription, whereas hardware you have to
               purchase outright.
               
               It would be interesting to compare hardware-as-a-service or
               similar as well, but I don't know if I see that offered for
               individual use.
       
                andruby wrote 1 day ago:
                 100x is probably hyperbole. 37signals saved between 50 and
                 66% in hosting costs when moving from cloud to self-hosted.
                
  HTML          [1]: https://basecamp.com/cloud-exit
       
                  Imustaskforhelp wrote 22 hours 24 min ago:
                   Considering that ramflation happened, and assuming the
                   cost of hardware is spread over 5 years, someone please
                   run the numbers again.
                   
                   It would be interesting to see Basecamp's scale. I just
                   saw that Hetzner offers 1024 GB of RAM for around $500.
                   
                   37signals spent around $700k on servers, I think, so if
                   someone has that much money floating around, perhaps.
                   
                   I looked at their numbers and they mentioned around
                   $1,300/month just for hardware for 1.3 TB, so Hetzner
                   might still make more economic sense somehow.
                   
                   I think the problem for some of these is that they go
                   too hard on the managed services. Those are good
                   sometimes as well, but there are cheaper managed clouds
                   than AWS etc. (UpCloud, OVH, etc.), and at the end of
                   the day it's good to remember that if it bothers you
                   financially, you can migrate.
                   
                   Honestly, do whatever you want and start however you
                   want, because these things definitely interest me (which
                   is why I am here), but I think most compute providers
                   have really raced to the bottom.
                   
                   The problem usually feels to me like worrying that you
                   might break the terms of service or something similar
                   once you are at scale. Not that this stops being a
                   problem with colo exactly, but colo still brings more
                   freedom.
                   
                   I think if one wants freedom, they can always contact
                   some compute providers and find which can support their
                   use case best while still being economical, and then
                   choose the best option from the multitude available.
                   
                   Also, vertical scaling is a beast.
                   
                   I recently went deep into learning about cloud prices,
                   so I want to ask: can you tell me more about the servers
                   that 37signals bought, or any other company you know of?
                   I can probably create a list of when it makes sense,
                   when it doesn't, and the best options available in the
                   market.
       
                  victorbjorklund wrote 1 day ago:
                   But they have scale. A small company will save less
                   because it’s not that much more work to handle, say, a
                   100-node Kubernetes cluster vs a 10-node one.
       
                    kachapopopow wrote 1 day ago:
                    A small company benefits more than anyone since it's not
                    rocket science to learn these things so you can just put on
                    your system administrator hat once every few weeks, would
                    not be ideal to lose that employee which is why I always
                    suggest a couple of people picking up this very useful
                    skill.
                    
                    But I don't know much about how it is a real world and
                    normal 9 to 5 I have taken up jobs from system
                    administration to reverse engineering and to even making
                    plugins and infrastructure for minecraft I generally only
                    work these days when people don't have any other choice and
                    need someone who is pretty good at everything so I am
                    completely out of the loop.
       
                      victorbjorklund wrote 10 hours 51 min ago:
                       It takes me almost equal time to manage a Kubernetes
                       cluster with 10 nodes as with 100 nodes. If I have
                       to spend say 5 hours per month at a cost of say 100
                       usd/hour, it costs 500 usd/month to manage. If
                       leaving the cloud saves say 100 usd/node (down from
                       200 usd/node), a small company's cost would be
                       (10 x 100) + 500 = 1500 usd/month, a cost reduction
                       of 25%. For a large company it would be
                       (100 x 100) + 500 = 10500, which means 47.5%
                       savings. Do you see why the savings are greater with
                       scale?
       
                        kachapopopow wrote 8 hours 7 min ago:
                        well bigger clusters have weird complexities and
                        require specialized knowledge if you don't want your
                        production to blow up every couple of months.
                        
                        small clusters can be run with minimal knowledge which
                        means the added cost is $0.
       
                    shimman wrote 1 day ago:
                     Self-hosting nowadays is way way way way easier than
                     you're thinking. I'm involved with various political
                     campaigns, and the first thing I help every team do is
                     provision a 10 year old laptop, flash Linux, and set
                     up DDNS. A $100 investment is more than enough for a
                     campaign of 10-20ish dedicated workers, with only one
                     or two users hitting the system at a time. If I can
                     teach a random 70 year old retiree or a 16 year old to
                     type a dozen different commands, I'm sure a paid
                     professional can learn too.
                     
                     People need to realize that when you self-host you can
                     choose to follow physical business constraints. If no
                     one is in the office to turn on a computer, you're
                     good. Also, consumer hardware is so powerful (even 10
                     year old hardware) that it can easily handle 100k
                     monthly active users, which is barely 3k daily users,
                     and I doubt most SMBs actually need to handle anything
                     beyond 500 concurrent users hardware-wise. So the
                     choice comes down to writing better and more
                     performant software, which is what is lacking
                     nowadays.
                     
                     People don't realize how good modern tooling +
                     hardware has gotten. You can get by with very little
                     if you want.
                     
                     I'd bet my year's salary that a good 40% of AWS
                     customers could probably be fine with a single
                     self-hosted server running basic plug-and-play FOSS
                     software on consumer hardware.
                     
                     People in our industry have been selling massive lies
                     about the need for scalability; the number of
                     companies that actually require such scalability is
                     quite small. You don't need a rocket ship to walk 2
                     blocks, and it often feels like that is the case in
                     our industry.
                     
                     If self-hosting is "too scary" for your business, you
                     can buy a $10 VPS, but after a single year you can
                     probably find decade-old hardware that is faster than
                     what you pay for.
       
                      oldandboring wrote 1 day ago:
                      I'm in your camp but I go for the cheap VPS. Lightsail
                      and DigitalOcean are amazing -- for $10/mo or less you
                      get a cheap little box that's essentially everything you
                      describe, but with all the peace of mind that comes from
                      not worrying about physical security, physical backups,
                      dynamic IPs/DDNS, and running out of storage space.
                      You're right that almost nobody needs most of the stuff
                      that AWS/GCP/Azure can do, but some things in the cloud
                      are worth paying for.
       
                        Imustaskforhelp wrote 22 hours 37 min ago:
                         Yea, absolutely this. This is what I was saying:
                         having a VPS for starting out definitely makes
                         sense. I think it starts making sense to build
                         your own cloud around the $500-1000/month mark.
                         
                         I searched Hetzner and honestly, right around the
                         $500 mark ($506.04 on their refurbished server
                         auction), I can get around 1024 GB of RAM, an AMD
                         EPYC 7502, and 2 x 1.92 TB datacenter SSDs.
                         
                         In this ramflation, imagine what buying that much
                         RAM would cost.
                         
                         I love homelabbing too, and I think an old laptop
                         can sometimes be enough for basic things and even
                         more modern things, but I did some calculations
                         and ColoCrossing, the professional renting model,
                         or even buying new hardware in this ramflation
                         would probably not work out.
                         
                         It's sad, but the only place it might make sense
                         is if you can get yourself a good firewall and
                         have an old laptop or server and do something like
                         this. Even then I have heard it described as not
                         worth it by many, but I think it's an interesting
                         experiment.
                         
                         Also, regarding 1024 GB of RAM: holy... I wonder
                         how many programs need that much. I will still do
                         homelabbing and such, but y'know, I am kinda
                         hard-pressed on how much we can recommend it with
                         ramflation being what it is. When I saw someone
                         write 100x, I really wondered how much is enough,
                         at what scale, and what others think.
       
                      victorbjorklund wrote 1 day ago:
                       Yea, but admit that I am right that it is not that
                       much harder to manage 100 nodes vs 10 nodes. (At
                       least you can agree you don’t need 10x more staff to
                       manage 100 nodes instead of 10.)
                       
                       That’s the key. Whether you need one person or 3
                       doesn’t matter. The point is that the salaries are
                       fixed costs.
       
                        shimman wrote 21 hours 31 min ago:
                         Ah sorry, I completely misread. You are right, and
                         to add another dimension: even when you go to the
                         cloud you still have to hire nearly the same
                         amount of personnel to deal with those tools. I've
                         never worked at a software company that didn't
                         have devs specifically dealing with cloud issues
                         and integrations.
       
                        mystifyingpoi wrote 1 day ago:
                        You are right, but it's a feature of Kubernetes
                        actually. If you treat nodes as cattle, then it doesn't
                        matter if there is 10 or 100 or 1000, as long as the
                        apiserver can survive the load and upgrades don't take
                        too long (though upgrades/maintenance can be done
                        slowly for even days without any problems).
                        
                        But all the stateful crap (like databases) gets
                        trickier and harder the more machines you have.
       
        simonw wrote 1 day ago:
        The problem with sandboxing solutions is that they have to provide very
        solid guarantees that code can't escape the sandbox, which is really
        difficult to do.
        
        Any time I'm evaluating a sandbox that's what I want to see: evidence
        that it's been robustly tested against all manner of potential attacks,
        accompanied by detailed documentation to help me understand how it
        protects against them.
        
        This level of documentation is rare! I'm not sure I can point to an
        example that feels good to me.
        
        So the next thing I look for is evidence that the solution is being
        used in production by a company large enough to have a dedicated
        security team maintaining it, and with real money on the line for if
        the system breaks.
       
          andrewaylett wrote 1 day ago:
          Cloudflare needs to worry about their sandbox, because they are
          running your code and you might be malicious.  You have less reason
          to worry: if you want to do something malicious to the box your
          worker code is running on, you already have access (because you're
          self-hosting) and don't need a sandbox escape.
       
            AgentME wrote 20 hours 32 min ago:
            Automatically running LLM-written code (where the LLM might be
            naively picking a malicious library to use, is poisoned by
            malicious context from the internet, or wrongly thinks it should
            reconfigure the host system it's executing code on) is an
            increasingly popular use-case where sandboxing is important.
       
              andrewaylett wrote 2 hours 39 min ago:
              That scenario is harder to distinguish from the adversarial case
              that public hosts like Cloudflare serve.  I don't think it's
              unreasonable to say that a project like OpenWorkers can be useful
              without meeting the needs of that particular use-case.
       
          m11a wrote 1 day ago:
          I agree, and as much as I think AI helps productivity, for a high
          security solution,
          
          > Recently, with Claude's help, I rewrote everything on top of
          rusty_v8 directly.
          
          worries me
       
            CuriouslyC wrote 21 hours 40 min ago:
            Have you tried Opus 4.5?
       
          ZiiS wrote 1 day ago:
          I think this is "sandboxed so your debugging doesn't need to
          consider interactions", not "sandboxed so you can run untrusted
          code".
       
          imcritic wrote 1 day ago:
          I don't think what you want is even possible. What would such
          guarantees even look like? "Hello, we are a serious cybersec firm
          and we have evaluated the code and it's pretty sound, trust us!"?

          "Hello, we are a serious cybersec firm and we have evaluated the
          code and here are our tests with results proving that we didn't
          find anything, the code is sound; Have we been thorough? We have,
          trust us!"
       
            AgentME wrote 20 hours 35 min ago:
            Something like "all code is run with no permissions to the
            filesystem or external IO by default, you have to do this to add
            fine-grained permissions for IO, the code is run within an
            unprivileged process that's sandboxed using standard APIs to defend
            in depth against possible v8 vulnerabilities, here's how this
            system protects against obvious possible attacks..." would be
            pretty good. Obviously it's not proof it's all implemented
            perfectly, but it would be a quick sign that the project is miles
            ahead of a naive implementation, and it would give someone
            interested some good pointers on what parts to start reviewing.
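
            To make that concrete, a deny-by-default manifest in that spirit
            could look something like this (names invented purely for
            illustration, not any real API):

              // Hypothetical config: nothing is reachable unless granted.
              const permissions = {
                filesystem: "none",               // no fs access at all
                network: { outbound: ["api.example.com"] }, // allowed hosts
                env: ["API_KEY"],                 // only this var exposed
              };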
       
              max_lt wrote 11 hours 28 min ago:
              This is exactly where we see things heading. The trust model is
              shifting - code isn't written by humans you trust anymore, it's
              generated by models that can be poisoned, confused, or just pick
              the wrong library.
              
              We're thinking about OpenWorkers less as "self-hosted Cloudflare
              Workers" and more as a containment layer for code you don't fully
              control. V8 isolates, CPU/memory limits, no filesystem access,
              network via controlled bindings only.
              
              We're also exploring execution recording - capture all I/O so you
              can replay and audit exactly what the code did.
              
              Production bug -> replay -> AI fix -> verified -> deployed.
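
              The recording part is conceptually just routing the worker's
              only network binding through a wrapper that logs each call. A
              rough sketch, not the shipped API:

                // All worker I/O flows through one wrapper, so a run can
                // be replayed from the log later (illustrative only).
                const ioLog: { url: string; status: number }[] = [];

                async function recordedFetch(url: string) {
                  const res = await fetch(url); // the only network path
                  ioLog.push({ url, status: res.status });
                  return res;
                }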
       
            d4mi3n wrote 23 hours 54 min ago:
            Other responses address how you could go about this, but I'd just
            like to note that you touch on the core problem of security as a
            domain: At the end of the day, it's a problem of figuring out who
            to trust, how much to trust them, and when those assessments need
            to change.
            
            To use your example: Any cybersecurity firm or practitioner worth
            their salt should be *very* explicit about the scope of their
            assessment.
            
            - That scope should exhaustively detail what was and wasn't tested.
            
            - There should be proof of the work product, and an intelligible
            summary of why, how, and when an assessment was done.
            
            - They should give you what you need to have confidence in *your
            understanding of* your security posture, as well as evidence that
            you *have* a security posture you can prove with facts and data.
            
            Anybody who tells you not to worry and take their word for
            something should be viewed with extreme skepticism. It is a
            completely unacceptable frame of mind when you're legally and
            ethically responsible for things you're stewarding for other
            people.
       
            simonw wrote 1 day ago:
            That's the problem! It's really hard to find trustworthy sandboxing
            solutions, I've been looking for a long time. It's kind of my white
            whale.
       
              laurencerowe wrote 20 hours 17 min ago:
              As I understand it, separate isolates in a single process are
              inherently less secure than separate processes (e.g. Chrome's
              site isolation), which are in turn less secure than
              virtualization-based solutions.
              
              As a TinyKVM / KVM Server contributor I'm obviously hopeful our
              approach will work out, but we still have some way to go to
              reach a level of polish that makes it easy to get started with
              and to earn the confidence that comes from production-level
              experience.
              
              As a KVM-based solution, TinyKVM has the advantage of a much
              smaller surface area to secure, and it can offer fast
              per-request isolation: we can reset the VM state a couple of
              orders of magnitude faster than V8 can create a new isolate
              from a snapshot.
              
  HTML        [1]: https://github.com/libriscv/kvmserver
       
              indigodaddy wrote 1 day ago:
              I imagine you messed about with Sandstorm back in the day?
       
            gpm wrote 1 day ago:
            In terms of a one-off product without active support - the only
            thing I can really imagine is a significant use of formal methods
            to prove correctness of the entire runtime. Which is of course
            entirely impractical given the state of the technology today.
            
            Realistically, security these days is an ongoing process, not a
            one-off; compare Cloudflare's security page: [1] (to be clear,
            when I use the pronoun "we" I'm paraphrasing and not personally
            employed by Cloudflare/part of this at all)
            
            - Implicit/from other pieces of marketing: We're a reputable
            company, and these other big reputable companies who care about
            security and are juicy targets for attacks use this product.
            
            - We update V8 within 24 hours of a security update, compared to
            weeks for the big juicy target of Google Chrome.
            
            - We use various additional sandboxing techniques on top of V8,
            including the complete lack of high precision timers, and various
            OS level sandboxing techniques.
            
            - We detect code doing strange things and move it out of the
            multi-tenant environment into an isolated one just in case.
            
            - We detect code using APIs that increase the surface area (like
            debuggers) and move it out of the multi-tenant environment into
            an isolated one just in case.
            
            - We will keep investing in security going forwards.
            
            Running secure multi-tenant environments is not an easy problem. It
            seems unlikely that it's possible for a typical open source project
            (typical in terms of limited staffing, usually including a complete
            lack of on-call staff) to release software to do so today.
            
  HTML      [1]: https://developers.cloudflare.com/workers/reference/securi...
       
              max_lt wrote 1 day ago:
              Agreed. Cloudflare has dedicated security teams, 24h V8 patches,
              and years of hardening – I can't compete with that. The
              realistic use case for OpenWorkers is running your own code on
              your own infra, not multi-tenant SaaS. I will update the docs to
              reflect this.
       
          max_lt wrote 1 day ago:
          Fair point. The V8 isolate provides memory isolation, and we enforce
          CPU limits (100ms) and memory caps (128MB). Workers run in separate
          isolates, not separate processes, so it's similar to Cloudflare's
          model. That said, for truly untrusted third-party code, I'd recommend
          running the whole thing in a container/VM as an extra layer. The
          sandboxing is more about resource isolation than security-grade
          multi-tenancy.
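
          For intuition, from the worker's side the CPU cap means a handler
          like this sketch would be terminated rather than hang the host
          (illustrative, assuming the 100ms budget above):

            export default {
              async fetch(_req: Request): Promise<Response> {
                let n = 0;
                for (let i = 0; i < 1e12; i++) n += i; // >> 100ms of CPU
                return new Response(String(n)); // never reached: killed
              },
            };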
       
            gpm wrote 1 day ago:
            I think you should consider adjusting the marketing to reflect
            this. "untrusted JavaScript" -> "JavaScript", "Secure sandboxing
            with CPU (100ms) and memory (128MB) limits per worker" ->
            "Sandboxing with CPU (100ms) and memory (128MB) limits per worker",
            overhauling [1].
            
            Over promising on security hurts the credibility of the entire
            project - and the main use case for this project is probably
            executing trusted code in a self hosted environment not
            "execut[ing] untrusted code in a multi-tenant environment".
            
  HTML      [1]: https://openworkers.com/docs/architecture/security
       
              max_lt wrote 1 day ago:
              Great point, thanks. Just updated the site – removed
              "untrusted" and "secure", added a note clarifying the threat
              model.
       
          ForHackernews wrote 1 day ago:
          Not if you're self-hosting and running your own trusted code, you
          don't. I care about resource isolation, not security isolation,
          between my own services.
       
            twosdai wrote 1 day ago:
            Completely agree. There are some apps that unfortunately need to
            care about some level of security isolation, but with OpenWorkers
            they could just put those specific workers on their own
            isolated instance.
       
          samwillis wrote 1 day ago:
          Yes, exactly. The other reason Cloudflare workers runtime is secure
          is that they are incredibly active at keeping it patched and up to
          date with V8 main. It's often ahead of Chrome in adopting V8
          releases.
       
            oldmanhorton wrote 1 day ago:
            I didn’t know this, but there are also security downsides to
            being ahead of Chrome — namely, Chrome releases depend on
            “known good” V8 versions which have at least passed normal
            tests and minimal fuzzing, and V8 releases get much more public
            review and fuzzing by the time they reach the Chrome stable
            channel. I expect that if you want to be as secure as possible,
            you’d want to stay aligned with “whatever V8 is in Chrome
            stable.”
       
              kentonv wrote 21 hours 19 min ago:
              Cloudflare Workers often rolls out V8 security patches to
              production before Chrome itself does. That's different from beta
              vs. stable channel. When there is a security patch, generally all
              branches receive the patch at about the same time.
              
              As for beta vs. stable, Cloudflare Workers is generally somewhere
              in between. Every 6 weeks, Chrome and V8's dev branch is promoted
              to beta, beta branch to stable, and stable becomes obsolete.
               Somewhere during the six weeks between versions, Cloudflare
              Workers moves from stable to beta. This has to happen before the
              stable version becomes obsolete, otherwise Workers would stop
              receiving security updates. Generally there is some work involved
              in doing the upgrade, so it's not good to leave it to the last
              moment. Typically Workers will update from stable to beta
              somewhere mid-to-late in the cycle, and then that beta version
              subsequently becomes stable shortly thereafter.
              
              (I'm the lead engineer for Cloudflare Workers.)
       
                max_lt wrote 19 hours 37 min ago:
                Thanks for the clarification on CF's V8 patching strategy, that
                24h turnaround is impressive and exactly why I point people to
                Cloudflare when they need production-grade multi-tenant
                security.
                
                OpenWorkers is really aimed at a different use case: running
                your own code on your own infra, where the threat model is
                simpler. Think internal tools, compliance-constrained
                environments, or developers who just want the Workers DX
                without the vendor dependency.
                
                Appreciate the work you and the team have done on Workers, it's
                been the inspiration for this project for years.
       
          vlovich123 wrote 1 day ago:
          Since it’s self hosted the sandboxing aspect at the
          language/runtime level probably matters just a little bit less.
       
        vmg12 wrote 1 day ago:
        Does this actually use the cloudflare worker runtime or is this just a
        way to run code in v8 isolates?
       
          max_lt wrote 1 day ago:
          It's a custom V8 runtime built with rusty_v8, not the actual
          Cloudflare runtime (github.com/openworkers/openworkers-runtime-v8).
          The goal is API compatibility – same Worker syntax (fetch handler,
          Request/Response, etc.) so you can migrate code easily. Under the
          hood it's completely independent.
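
          For example, a standard module-syntax worker like this should run
          unchanged on either runtime:

            export default {
              async fetch(request: Request): Promise<Response> {
                const { pathname } = new URL(request.url);
                return new Response("hello from " + pathname, {
                  headers: { "content-type": "text/plain" },
                });
              },
            };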
       
        indigodaddy wrote 1 day ago:
        Perhaps it would be helpful to also lay out the things that don't
        work today (or, e.g., a roadmap of what's being worked on but
        doesn't currently work).  Anyway, looks very cool!
       
          max_lt wrote 1 day ago:
          Good idea! Main things not yet implemented: Durable Objects,
          WebSockets, HTMLRewriter, and cache API. Next priority is execution
          recording/replay for debugging. I'll add a roadmap section to the
          docs.
       
       