OpenAI + NVIDIA: $100B Bet on 10GW AI Infrastructure
Everything you need to know about the OpenAI-NVIDIA $100B deal.
OpenAI and Nvidia signed a letter of intent: Nvidia may invest up to $100B in OpenAI to fund AI data centres using millions of Nvidia chips. [RELEASE]
Why it’s important: Nvidia is effectively pre-paying one of its largest customers to lock in demand. The deal treats compute scarcity as strategy: by committing up to $100B to guarantee its own GPU consumption, Nvidia erects a formidable moat against rivals like AMD, Intel, and Google's in-house silicon.
Let’s dive in.
What’s happening: OpenAI and NVIDIA are joining forces to deploy 10 gigawatts of AI data centres, powered by millions of NVIDIA GPUs, with NVIDIA committing up to $100B in phased investment. The first gigawatt will go live in 2H 2026 on NVIDIA’s new Vera Rubin platform.
The centrepiece of the collaboration is a phased investment of up to $100B by Nvidia in OpenAI. This capital will be disbursed progressively as each gigawatt (GW) of compute capacity is deployed. An initial investment of $10B in non-voting shares is anticipated upon the signing of the definitive agreement, with Nvidia receiving an equivalent equity stake in OpenAI in return. This unique financial structure means Nvidia's investment is directly tied to the physical buildout and its own product consumption.
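Under the structure described above, each tranche is released as capacity comes online. A minimal sketch of the arithmetic, assuming an even $10B-per-gigawatt split (our illustration; the actual disbursement schedule has not been disclosed):

```python
# Hypothetical model of the phased disbursement described in the letter of intent.
# Assumption: the up-to-$100B commitment is released in even $10B tranches,
# one per gigawatt deployed. The real schedule is not public.

TOTAL_COMMITMENT_B = 100  # up to $100B committed by Nvidia
TOTAL_CAPACITY_GW = 10    # 10 GW of planned AI data centre capacity

TRANCHE_B = TOTAL_COMMITMENT_B / TOTAL_CAPACITY_GW  # $10B per GW (assumed)

def disbursed_after(gw_deployed: float) -> float:
    """Cumulative investment ($B) released once `gw_deployed` GW are live."""
    return min(gw_deployed, TOTAL_CAPACITY_GW) * TRANCHE_B

# The first gigawatt (slated for 2H 2026) would unlock the initial $10B tranche,
# matching the anticipated initial investment at signing.
print(disbursed_after(1))   # 10.0
print(disbursed_after(10))  # 100.0
```

The point of the structure is visible in the function: capital only flows as physical capacity (and hence Nvidia's own GPU sales) materialises, capping at the full commitment.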
Why it matters: OpenAI, in particular, is confronting an immense and escalating computing bill, with estimates reaching as high as $350B. The deal is a direct response to this challenge, reflecting the "insatiable" industry-wide demand for AI infrastructure. By 2030, global power demand for data centres is projected to reach approximately 220 GW, underscoring the urgency and strategic importance of securing power and hardware capacity.
👉 Subscribe to our digital asset newsletter & join 35k+ others
“NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT. This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence.”
— Jensen Huang, founder and CEO of NVIDIA
Stepping back: In January 2025, Donald Trump announced Project Stargate, a $500B joint venture between OpenAI, SoftBank and Oracle. It promised over 100,000 American jobs, with the first phase starting in Abilene, Texas, and 10 data centres already under construction. The project is ongoing but has faced significant financial and logistical challenges and has fallen behind its initial ambitious timeline. So, was Elon right? [Full story]
Actually, no: the day after, OpenAI announced five new Stargate sites with Oracle and SoftBank to build data centres.
Strategic lens:
For Nvidia: Subsidising demand is the ultimate moat. By financing customers (CoreWeave, Lambda, now OpenAI), Nvidia guarantees GPU consumption and keeps rivals like AMD, Google TPU, and Groq boxed out.
For OpenAI: A locked-in compute pipeline removes its biggest scaling bottleneck. It can focus on models + applications without resource uncertainty.
For Investors: Nvidia’s move is a hedge against the risk of oversupply if AI demand tapers. Instead of waiting, it manufactures demand.
The competitive angle: Nvidia is not just a chipmaker; it’s acting like a sovereign investor:
CoreWeave → ~$2.3B in Nvidia-backed financing, plus a $6.3B cloud capacity buyback agreement with Nvidia.
Lambda → Nvidia-aligned, favoured in allocation.
Now, OpenAI → up to $100B infusion.
This “customer financing” model could become the new normal for hyperscale AI.
Our take: Nvidia’s deal with OpenAI is the clearest sign yet of its supplier-led vertical integration strategy. Instead of just selling chips, Nvidia is financing and orchestrating entire AI infrastructure buildouts, guaranteeing demand for its GPUs while sidelining rivals like AMD and Google. This playbook isn’t new: Nvidia structured a $6.3B deal with CoreWeave that included buyback guarantees on unused capacity and backed Lambda with a $275M credit facility. By funding customers, Nvidia manufactures demand, hedges against chip oversupply, and secures long-term financial upside from its customers' success. Even its $5B investment in Intel shows a bigger ambition: controlling more of the supply chain and positioning itself as the central architect of the AI economy.
Nvidia’s newly launched Vera Rubin platform is its biggest hardware leap yet, up to 7.5x faster than current systems and designed to handle both massive training runs and real-time inference. But the real advantage isn’t raw performance, it’s CUDA. By tying its software ecosystem directly to OpenAI’s most advanced models, Nvidia makes it far harder for AMD, Intel, or even hyperscalers to displace it. This is less a chip race and more an ecosystem lock-in play, where hardware, software, and financing are fused into one moat.
For Nvidia, the OpenAI deal is less a financial stretch and more a revenue engine. With projected free cash flow near $100B in FY26, a phased $100B investment is feasible, and analysts expect it could generate $350B in revenue by 2030, a potential 3.5x return. The stock pop reflects investor confidence that Nvidia isn’t overextending, it’s locking in demand, reducing risk, and cementing control of the AI value chain.
For OpenAI, the deal is about survival. Building AGI requires compute at a scale few can fund alone. Nvidia’s $100B commitment removes OpenAI’s biggest bottleneck, access to a reliable GPU supply, so it can keep scaling models without resource uncertainty. While OpenAI is still pursuing custom chip projects with Broadcom and TSMC, this alliance shows how urgent the compute crunch has become. The company has tried to solve it through coalitions like Project Stargate, whose buildout has lagged its announcement, but agreeing to deep dependency on Nvidia underscores the reality: in the AGI race, compute isn’t optional, it’s existential.
What’s next: The first gigawatt of this massive computing power is slated for deployment in the second half of 2026. Nvidia and OpenAI are both very optimistic about the new venture. But we need to watch these signals:
If this $100B deal goes live, where does that leave Stargate, the $500B data centre project?
Beyond the planned non-voting stake, does Nvidia demand further equity or special commercial rights in return for subsidising compute?
Will Google or AWS start financing their own ecosystem plays to keep pace?
For corporates and investors, the lesson is clear: this isn’t just a chip war, it’s an infrastructure war. Companies too dependent on a single vendor risk disruption if contracts shift or ecosystems tighten. Winners will be those who stay hardware-agnostic, build flexible partnerships, or carve out niches where Nvidia’s dominance doesn’t reach. For investors, backing Nvidia is a bet on centralisation; backing AMD, Groq, or software specialists is a bet on fragmentation. Both paths can pay, but only with a clear conviction about where the AI stack is heading.
Competition is intensifying. AMD’s MI300 chips are closing the hardware gap, but software friction keeps adoption limited. Google and AWS are pushing their own silicon, making them both Nvidia’s largest customers and potential rivals. And inference-first challengers like Groq are carving niches with speed and efficiency. Nvidia’s counter is to cover the entire AI lifecycle, from training superclusters to low-latency inference, ensuring no competitor can box it into a single segment.
That’s all for today.
Take care,
Marc & Team
🚀 Work with us: We create pioneering thought leadership that helps digital asset and technology companies lead the conversation, earn trust and win business.
Check out our AI newsletter, AI Operator, here.
Check out our Crypto Treasury Alpha newsletter here.