Subj : 'Enfabrica has the coolest technology': Nvidia spent nearly $1 bi
To   : All
From : TechnologyDaily
Date : Fri Sep 26 2025 22:45:08

'Enfabrica has the coolest technology': Nvidia spent nearly $1 billion on a
chip maker to secure its future on the same day it gave Intel $5 billion -
and here's why this is actually a more important investment

Date: Fri, 26 Sep 2025 21:36:00 +0000

Description: Nvidia's $900 million Enfabrica acquisition targets AI scaling
challenges using ACF-S chips, EMFASYS memory, and high-radix multipath
networks.

FULL STORY
======================================================================

- Nvidia's acquisition brings Enfabrica engineers directly into its AI
  ecosystem
- EMFASYS chassis pools up to 18TB of memory for GPU clusters
- Elastic memory fabric frees HBM for time-sensitive AI tasks

Nvidia's decision to spend more than $900 million on Enfabrica was something
of a surprise, especially as it came alongside a separate $5 billion
investment in Intel. According to ServeTheHome, Enfabrica has "the coolest
technology", likely because of its unique approach to solving one of AI's
largest scaling problems: tying tens of thousands of computing chips
together so they can operate as a single system without wasting resources.
This deal suggests Nvidia believes solving interconnect bottlenecks is just
as critical as securing chip production capacity.

A unique approach to data fabrics

Enfabrica's Accelerated Compute Fabric Switch (ACF-S) architecture was built
with PCIe lanes on one side and high-speed networking on the other. Its
ACF-S Millennium device is a 3.2Tbps network chip with 128 PCIe lanes that
can connect GPUs, NICs, and other devices while maintaining flexibility.
The company's design allows data to move between ports or across the chip
with minimal latency, bridging Ethernet and PCIe/CXL technologies. For AI
clusters, this means higher utilization and fewer idle GPUs waiting for
data, which translates into a better return on investment for costly
hardware.

Another piece of Enfabrica's offering is its EMFASYS chassis, which uses CXL
controllers to pool up to 18TB of memory for GPU clusters. This elastic
memory fabric allows GPUs to offload data from their limited HBM into shared
memory across the network. By freeing up HBM for time-critical tasks,
operators can reduce token processing costs; Enfabrica said reductions could
reach up to 50% and allow inference workloads to scale without overbuilding
local memory capacity. For large language models and other AI workloads,
such capabilities could become essential (a conceptual sketch of the offload
pattern appears after the story).

The ACF-S chip also offers high-radix multipath redundancy. Instead of a few
massive 800Gbps links, operators can use 32 smaller 100Gbps connections. If
one switch fails, only about 3% of the bandwidth is lost, rather than a
large portion of the network going offline (the arithmetic is worked through
in a short sketch after the story). This approach could improve cluster
reliability at scale, but it also increases complexity in network design.

The deal brings Enfabrica's engineering team, including CEO Rochan Sankar,
directly into Nvidia, rather than leaving such innovation to rivals like AMD
or Broadcom. While Nvidia's Intel stake secures manufacturing capacity, this
acquisition directly addresses scaling limits in AI data centers.
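
The elastic-memory idea above can be pictured as a simple tiering policy:
hot data stays in a GPU's HBM, colder data spills into the big shared pool,
and comes back when it is touched again. The sketch below (Python) is purely
illustrative; the class name, capacities, and 2GB block size are assumptions
made for the example, not Enfabrica's or Nvidia's actual software.

    from collections import OrderedDict

    HBM_CAPACITY_GB = 80        # one GPU's local HBM (assumed figure)
    POOL_CAPACITY_GB = 18_000   # 18TB pooled memory, as described for EMFASYS

    class ElasticMemoryPlacer:
        """Toy policy: keep hot blocks in HBM, spill cold blocks to the pool."""

        def __init__(self):
            self.hbm = OrderedDict()   # block_id -> size_gb, ordered by recency
            self.pool = {}             # block_id -> size_gb
            self.hbm_used = 0.0
            self.pool_used = 0.0

        def touch(self, block_id, size_gb):
            """Access a block: keep it in HBM, evicting cold blocks to the pool."""
            if block_id in self.hbm:          # already hot: refresh recency
                self.hbm.move_to_end(block_id)
                return
            if block_id in self.pool:         # cold hit: pull it back from the pool
                self.pool_used -= self.pool.pop(block_id)
            # Evict least-recently-used blocks from HBM until the new block fits
            while self.hbm_used + size_gb > HBM_CAPACITY_GB and self.hbm:
                old_id, old_size = self.hbm.popitem(last=False)
                self.hbm_used -= old_size
                if self.pool_used + old_size <= POOL_CAPACITY_GB:
                    self.pool[old_id] = old_size
                    self.pool_used += old_size
            self.hbm[block_id] = size_gb
            self.hbm_used += size_gb

    placer = ElasticMemoryPlacer()
    for step in range(200):               # simulate accesses to 2GB KV-cache blocks
        placer.touch(f"kv-{step % 60}", 2.0)
    print(f"HBM in use: {placer.hbm_used:.0f} GB, pooled: {placer.pool_used:.0f} GB")

Run as written, the toy ends with HBM full of hot blocks (80 GB) and the
overflow (40 GB) parked in the pool, which is the effect the article
describes: HBM stays reserved for time-critical data while extra capacity
comes from the shared tier.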
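
The multipath claim is easy to sanity-check. The 32 x 100Gbps figure comes
from the article; the 4 x 800Gbps layout is added here only as a contrast
case, and the helper function is written for this example.

    def loss_on_single_failure(num_links: int, link_gbps: float):
        """Return (total Tbps, % of bandwidth lost if one link or switch fails)."""
        total_tbps = num_links * link_gbps / 1000
        lost_pct = 100.0 / num_links
        return total_tbps, lost_pct

    for links, speed in ((32, 100), (4, 800)):
        total, lost = loss_on_single_failure(links, speed)
        print(f"{links} x {speed}G = {total:.1f} Tbps; one failure loses {lost:.1f}%")

    # 32 x 100G = 3.2 Tbps; one failure loses 3.1%
    # 4 x 800G  = 3.2 Tbps; one failure loses 25.0%

Both layouts add up to the same 3.2Tbps the ACF-S Millennium advertises, but
spreading it across 32 paths means a single failure costs roughly 1/32 of
the bandwidth, the "about 3%" in the story, instead of a quarter of it.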
======================================================================
Link to news story:
https://www.techradar.com/pro/enfabrica-has-the-coolest-technology-nvidia-spent-nearly-usd1-billion-on-a-chip-maker-to-secure-its-future-on-the-same-day-it-gave-intel-usd5-billion-and-heres-why-this-is-actually-a-more-important-investment

--- Mystic BBS v1.12 A49 (Linux/64)
 * Origin: tqwNet Technology News (1337:1/100)