In what may be one of the largest cloud-compute contracts in history, OpenAI has agreed to purchase $300 billion worth of computing power from Oracle over a roughly five-year period. The deal, confirmed by reports from the Wall Street Journal and others, doesn’t take effect until 2027 but is already sending ripple effects through the AI industry and markets alike.
Underneath the headline number is another striking figure: the deal will involve building out 4.5 gigawatts of data-center capacity, roughly equivalent to the continuous electricity usage of several million U.S. homes or the output of more than two Hoover Dams. It’s a scale that suggests OpenAI is planning for a future where its compute needs won’t just grow; they’ll explode.
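Those comparisons check out with a quick back-of-envelope calculation. The figures below for Hoover Dam capacity (~2.08 GW) and average U.S. household consumption (~10,700 kWh/year) are rough public estimates, not numbers from the deal itself:

```python
# Back-of-envelope check on the 4.5 GW data-center figure.
# Both constants are approximate public estimates, not deal figures.
HOOVER_DAM_GW = 2.08            # Hoover Dam nameplate capacity, approx.
AVG_HOME_KWH_PER_YEAR = 10_700  # average U.S. household usage, approx.

capacity_gw = 4.5

# Continuous draw of one average home, converted from kW to GW
home_draw_gw = AVG_HOME_KWH_PER_YEAR / (365 * 24) / 1_000_000

homes_powered = capacity_gw / home_draw_gw   # ~3.7 million homes
hoover_dams = capacity_gw / HOOVER_DAM_GW    # ~2.2 Hoover Dams

print(f"~{homes_powered / 1e6:.1f} million homes, ~{hoover_dams:.1f} Hoover Dams")
```

So “millions of homes” and “more than two Hoover Dams” are both consistent with the 4.5 GW figure.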
This arrangement appears to be part of Project Stargate, OpenAI’s ambitious infrastructure initiative undertaken jointly with Oracle, SoftBank, and others. Stargate was announced earlier this year with lofty investment targets but few specifics; this deal gives it a concrete financial anchor.
From a financial and strategic perspective, the implications are striking. OpenAI currently generates somewhere in the ballpark of $10–13 billion in annual revenue, depending on the report. Signing up for $300 billion in cloud purchases ties a huge portion of its future spending to Oracle. That signals confidence, or perhaps necessity, in securing computing power at scale. Meanwhile, Oracle is staking a large chunk of its future revenue on this one relationship, which is risky but could also establish it as one of the backbone providers for next-generation AI.
In my view, this deal exposes just how central compute power is to AI’s next frontier. For OpenAI, whose models require vast infrastructure, the arrangement may be about more than cost: it’s about control, capacity, and scaling in a landscape where delays or shortages in compute can slow everything down. Moving beyond reliance on a single provider gives OpenAI more leverage, but it also locks the company into a high fixed cost with Oracle, so execution will matter a great deal.
There are risks. One is demand: whether OpenAI will truly need—or be able to optimally use—such a gargantuan amount of compute in practice. Another is energy: 4.5 GW of data center capacity means hefty energy commitments, sustainability concerns, and regulatory pressures (think power availability, emissions, sourcing). Oracle, too, must show it can build, maintain, and secure the data centers, and ensure the supply of AI-grade hardware (chips, cooling, networking). If any part of the chain breaks—whether hardware shortages, regulatory constraints, or energy bottlenecks—delays or cost overruns could tilt this from triumph to cautionary tale.
Yet, the upside is huge. If this deal works, OpenAI will be in a position to push large-scale models and AI services that require massive compute—everything from supercharged training runs to real-time inference at scale. It could accelerate how fast new features land, how responsive services are globally, and perhaps how distributed AI becomes. Oracle, on its end, steps further into the spotlight as a major infrastructure supplier in the AI boom—not just another cloud provider, but possibly one of the main gears powering generative AI’s future.
In the end, this is more than a contract. It’s a signal: AI isn’t just a software problem—it’s an infrastructure arms race. Whoever controls compute at scale—and ensures energy, reliability, access—may well control how fast AI shapes the world ahead. And this $300B deal may turn out to be one of those inflection points.