NVIDIA has announced that Meta and Oracle will enhance their AI data center networks using the company’s new Spectrum-X™ Ethernet platform — a groundbreaking move aimed at accelerating large-scale AI workloads and improving training efficiency across massive GPU clusters.
Boosting AI Data Centers with Spectrum-X Ethernet
Meta and Oracle are integrating NVIDIA’s Spectrum-X Ethernet switches into their cloud and AI ecosystems. The platform is an open, accelerated networking architecture built to optimize AI data center operations, speeding up deployment at scale while delivering large gains in AI training efficiency and reducing time to insight.
“Trillion-parameter models are transforming data centers into giga-scale AI factories,” said Jensen Huang, founder and CEO of NVIDIA.
“Industry leaders like Meta and Oracle are standardizing on Spectrum-X Ethernet to drive this industrial revolution. Spectrum-X is not just faster Ethernet — it’s the nervous system of the AI factory, enabling hyperscalers to connect millions of GPUs into a single giant computer to train the largest models ever built.”
Oracle’s Giga-Scale AI Vision
Oracle plans to build giga-scale AI factories powered by the NVIDIA Vera Rubin architecture, interconnected with Spectrum-X Ethernet.
“Oracle Cloud Infrastructure is designed from the ground up for AI workloads, and our partnership with NVIDIA extends that AI leadership,” said Mahesh Thiagarajan, Executive Vice President of Oracle Cloud Infrastructure. “By adopting Spectrum-X Ethernet, we can interconnect millions of GPUs with breakthrough efficiency so our customers can train, deploy, and benefit from the next wave of generative and reasoning AI.”
Meta Scales AI Infrastructure with Open Networking
Meta will also deploy Spectrum Ethernet switches as part of its Facebook Open Switching System (FBOSS) — an open-source software platform for managing network switches at scale. The integration will enhance Meta’s AI infrastructure by enabling faster deployments and improved training efficiency for its large language models.
“Meta’s next-generation AI infrastructure requires open and efficient networking at a scale the industry has never seen before,” said Gaya Nagarajan, Vice President of Networking Engineering at Meta.
“By integrating NVIDIA Spectrum Ethernet into the Minipack3N switch and FBOSS, we can extend our open networking approach while unlocking the efficiency and predictability needed to train ever-larger models and bring generative AI applications to billions of people.”
Inside NVIDIA Spectrum-X Ethernet
The NVIDIA Spectrum-X Ethernet platform, which includes Spectrum-X switches and SuperNICs, is the first Ethernet solution purpose-built for the trillion-parameter model era. It allows hyperscalers to connect millions of GPUs with unprecedented efficiency and scalability.
Unlike traditional Ethernet, where flow collisions often cap effective data throughput at around 60% of available bandwidth, Spectrum-X sustains up to 95% effective throughput through advanced congestion control. That leap changes the economics of AI-scale networking, making it possible to link data centers across regions, and even continents, into vast, connected AI super-factories.
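To put that utilization gap in concrete terms, here is a rough back-of-the-envelope sketch comparing how long a single collective data exchange would take at 60% versus 95% effective throughput. The 400 Gb/s link speed and 1 TB payload are illustrative assumptions for the example, not figures from NVIDIA or the deployments described here.

```python
# Illustrative back-of-the-envelope comparison (assumed numbers, not NVIDIA figures):
# how effective link utilization changes per-link transfer time.

LINK_GBPS = 400      # assumed per-GPU link speed, gigabits per second
PAYLOAD_TB = 1.0     # assumed size of one collective gradient exchange, terabytes

def transfer_seconds(utilization: float) -> float:
    """Time to move PAYLOAD_TB over one link at the given effective utilization."""
    effective_gbps = LINK_GBPS * utilization
    payload_gigabits = PAYLOAD_TB * 8_000   # 1 TB (decimal) = 8,000 gigabits
    return payload_gigabits / effective_gbps

for util in (0.60, 0.95):
    print(f"utilization {util:.0%}: {transfer_seconds(util):6.1f} s per exchange")

# At 60% utilization the same transfer takes roughly 1.58x longer than at 95%,
# and that penalty recurs in every communication phase of a training run.
```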
A Full-Stack AI Networking Revolution
Spectrum-X extends NVIDIA’s full-stack platform — spanning GPUs, CPUs, NVLink™, and AI software — to deliver seamless performance from compute to network. With adaptive routing, congestion control, and AI-driven telemetry, it ensures predictable and efficient performance for even the largest AI training and inference clusters.
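As a loose illustration of why adaptive routing matters at this scale, the toy Python sketch below contrasts static hash-based path selection (the classic ECMP approach) with load-aware placement that always picks the least-loaded path. The four-path fabric, synthetic flow sizes, and greedy placement policy are assumptions made for the example; they are not a model of how Spectrum-X actually implements adaptive routing.

```python
# Toy contrast between static hash-based path selection (classic ECMP) and
# load-aware adaptive routing. The four-path fabric and flow sizes are invented
# for illustration and do not model Spectrum-X internals.

import random

NUM_PATHS = 4
random.seed(0)
flows = [random.randint(1, 100) for _ in range(32)]   # synthetic flow sizes, arbitrary units

# Static: each flow is pinned to one path by a hash of its ID, ignoring flow size.
static_load = [0] * NUM_PATHS
for flow_id, size in enumerate(flows):
    static_load[hash(flow_id) % NUM_PATHS] += size

# Adaptive: each flow (or flowlet) is steered to the currently least-loaded path.
adaptive_load = [0] * NUM_PATHS
for size in flows:
    adaptive_load[adaptive_load.index(min(adaptive_load))] += size

def imbalance(load):
    """Ratio of the busiest path to the average path; 1.0 is perfectly balanced."""
    return max(load) / (sum(load) / len(load))

print("static   max/avg load:", round(imbalance(static_load), 2))
print("adaptive max/avg load:", round(imbalance(adaptive_load), 2))
```

Even in this simplified setting, load-aware placement keeps the busiest path close to the average, which is the property congestion-aware fabrics aim for at data center scale.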
As Meta and Oracle embrace NVIDIA’s Spectrum-X Ethernet, they mark a new chapter in the race to build scalable, energy-efficient, and high-performance AI data centers — redefining the future of cloud and generative AI infrastructure.