CoreWeave and Meta Expand AI Cloud Deal to $35 Billion
Krasa AI
2026-04-09
4 minute read
Meta just placed the largest bet on external AI infrastructure in tech history. The company is committing another $21 billion to CoreWeave, bringing the total partnership value to $35 billion and signaling that the AI compute arms race is far from over.
The expanded agreement, announced on April 9, extends through December 2032 and will see CoreWeave providing AI cloud capacity from multiple data centers. What makes this deal particularly notable is the hardware at its center — early deployments of Nvidia's next-generation Vera Rubin platform, the successor to the Blackwell architecture that currently dominates AI data centers.
Why Meta Needs External Compute
Meta isn't short on its own data centers. The company is building a 2-gigawatt facility in Louisiana and plans to spend roughly $65 billion on infrastructure in 2026 alone. So why outsource $35 billion to CoreWeave?
The answer comes down to speed and flexibility. Building data centers from scratch takes years. CoreWeave specializes in GPU-rich cloud infrastructure that's purpose-built for AI workloads, and it can deliver capacity faster than Meta can build it. With Muse Spark — Meta's first model from its Superintelligence Labs — now rolling out across WhatsApp, Instagram, Facebook, and Messenger, the company needs inference capacity now, not in 2028.
That shift toward inference is a key detail in this agreement. Training a model is a one-time (if expensive) process. Running that model for billions of users across multiple platforms is a continuous, massive compute demand. Meta needs infrastructure optimized for always-on AI inference, and CoreWeave has positioned itself to deliver exactly that.
What This Means for CoreWeave
For CoreWeave, the deal solves a critical business risk. In 2024, Microsoft represented 62% of the company's revenue — a concentration that made investors nervous. With this expanded Meta commitment, no single customer will account for more than 35% of total revenue. That's a much healthier business profile for a company that went public just last year.
CoreWeave's stock rose 3.5% on the announcement, closing at $92. The company also disclosed plans to raise $3 billion in fresh debt to fund the infrastructure buildout. The new spending is scheduled between 2027 and 2032, giving CoreWeave a long runway of contracted revenue.
The deal validates CoreWeave's bet on becoming the go-to cloud provider for AI-native workloads. While Amazon, Google, and Microsoft dominate general cloud computing, CoreWeave has carved out a niche by focusing exclusively on GPU compute for AI training and inference.
The Nvidia Connection
The inclusion of Nvidia's Vera Rubin platform in this deal is significant. Vera Rubin represents Nvidia's next major architecture after Blackwell, promising substantial improvements in efficiency and performance for AI inference workloads. Meta will be among the first hyperscale customers to deploy these chips at scale through CoreWeave's infrastructure.
This three-way relationship — Meta providing the demand, CoreWeave managing the infrastructure, and Nvidia supplying the silicon — illustrates how the AI supply chain has matured into a structured, multi-billion-dollar ecosystem.
Industry Implications
The sheer size of this commitment suggests Meta's AI ambitions extend well beyond chatbots and content recommendations. Running inference for models like Muse Spark across platforms serving more than 3 billion daily users requires an unprecedented scale of compute.
It also raises the stakes for competitors. If Meta is willing to spend $35 billion with a single cloud partner — on top of its own massive capex — the compute bar for competitive AI deployment is higher than most companies can clear.
For the broader cloud infrastructure market, the deal confirms that AI workloads are driving a once-in-a-generation buildout. The money flowing into data centers, GPU clusters, and power infrastructure shows no signs of slowing.
What's Next
The infrastructure buildout will ramp through 2027 as CoreWeave deploys Vera Rubin-powered data centers for Meta. Investors should watch for updates on the custom chip front as well — Meta continues developing its own MTIA (Meta Training and Inference Accelerator) chips, which could eventually reduce its dependence on external GPU clouds.
For now, though, the message is clear: AI inference at scale is expensive, demand is growing faster than anyone can build, and the companies that control compute infrastructure hold enormous leverage in the AI era.