Murati's Thinking Machines Inks Multibillion Google Cloud Deal
Krasa AI
2026-04-23
5 minute read
Google Cloud announced Wednesday at Cloud Next that Thinking Machines Lab — the fourteen-month-old AI startup founded by former OpenAI CTO Mira Murati — has signed a multibillion-dollar agreement to expand its use of Google's AI infrastructure. The deal is valued in the single-digit billions and gives Thinking Machines access to Google's latest systems built around Nvidia's GB300 chips.
The agreement is not exclusive. Thinking Machines can — and will — keep using other cloud providers alongside Google. But the scale of the commitment puts Google Cloud in a position most observers didn't expect it to reach six months ago: inside the inner compute circle of the most-watched AI startup of the 2025 generation.
Context: Who Thinking Machines Is and Why the Compute Matters
Murati left OpenAI in September 2024 and founded Thinking Machines in February 2025, pulling several senior researchers with her. The company closed a $2 billion seed round at a $12 billion valuation almost immediately — at the time, the largest seed round in Silicon Valley history. It then went quiet for most of the following year.
That changed in October 2025, when Thinking Machines shipped its first product: Tinker. Tinker lets developers build custom frontier AI models using reinforcement learning — essentially, it automates the RL fine-tuning loop that every frontier lab has had to build internally. You bring a problem; Tinker produces a model tuned for your workload.
That architecture is what makes the Google deal necessary. RL at frontier scale broadcasts weight updates across thousands of GPUs thousands of times per run, with every iteration requiring near-instantaneous transfer across the entire fleet. Tinker turns that loop — expensive for one frontier lab — into a product that runs it on behalf of many customers. Which means Thinking Machines now needs the compute footprint of several frontier labs stacked together.
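The communication pattern described above can be sketched in miniature. This is a single-process toy, not Tinker's actual system; all names (`Worker`, `rollout_update`, `train`) are illustrative. It shows why the per-iteration broadcast dominates network cost: every worker must receive the new weights before the next rollout can begin, so the transfer sits on the critical path of every iteration.

```python
# Toy sketch of the per-iteration broadcast in an RL fine-tuning loop.
# Hypothetical names throughout -- this illustrates the communication
# pattern, not any real training framework.

from dataclasses import dataclass, field

@dataclass
class Worker:
    weights: list[float] = field(default_factory=lambda: [0.0])

    def rollout_update(self) -> list[float]:
        # Stand-in for "generate rollouts, score them, propose an update".
        return [w + 0.1 for w in self.weights]

def train(num_workers: int = 4, iterations: int = 3) -> list[float]:
    workers = [Worker() for _ in range(num_workers)]
    for _ in range(iterations):
        # Each worker proposes an update from its own rollouts.
        updates = [w.rollout_update() for w in workers]
        # Aggregate (here: a simple average) into new global weights.
        new_weights = [sum(u[i] for u in updates) / num_workers
                       for i in range(len(updates[0]))]
        # Broadcast: every worker gets the new weights before the next
        # iteration can start. At frontier scale this step spans
        # thousands of GPUs, which is why fast fleet-wide networking
        # matters so much for RL workloads.
        for w in workers:
            w.weights = list(new_weights)
    return workers[0].weights
```

In a real deployment the average would be a gradient aggregation over accelerator interconnects and the broadcast a collective operation across the fleet, but the shape of the loop, and the fact that it repeats thousands of times per run, is the same.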
What's Actually in the Deal
Google isn't disclosing exact dollar numbers, but "single-digit billions" places the deal in the same class as the largest third-party compute commitments in the industry. The key components:
Access to GB300 capacity. The GB300 is Nvidia's latest Blackwell-generation chip, and Google Cloud is a launch partner. Thinking Machines gets priority access through Google's Hypercomputer architecture, which bundles compute with storage, networking, and orchestration.
Jupiter network. Google called out Jupiter — its internal data center network — as enabling the "near-instantaneous weight transfers" Tinker's RL architecture requires. That's the kind of infrastructure detail that usually doesn't appear in press releases but matters enormously for RL.
A4X Max VMs. Thinking Machines reported 2x faster training and serving speeds on Google's A4X Max VMs versus prior-generation GPUs — the kind of concrete number companies don't publish unless they want to signal lock-in.
Continued multi-cloud. The deal is explicitly non-exclusive. Google's willingness to accept that is itself a signal about how competitive dynamics are playing out — customers have leverage they didn't have eighteen months ago.
Industry Impact
For Google Cloud, this deal is validation of a strategy the company has been pushing for the last two years: offer the best Nvidia-based infrastructure, paired with Google's own networking and orchestration, and sell to the AI labs that would otherwise default to Microsoft Azure or AWS. Google Cloud was historically the third-place cloud for AI workloads. Anthropic training on Google TPUs, Apple building a custom deal around Google infrastructure, and now Thinking Machines locking in on GB300 capacity — that's a pattern, not a coincidence.
For Nvidia, the Thinking Machines deal is a reminder that the company's near-monopoly on frontier training chips is holding, at least for now. Google announced its 8th-gen TPUs (TPU 8t for training, TPU 8i for inference) at the same Cloud Next event. But the flagship customer announcement on the day was built on Nvidia GB300, not Google silicon. That's worth noticing.
For Microsoft and AWS, the deal is a shot across the bow. Microsoft has OpenAI — but OpenAI is diversifying. AWS has Anthropic, locked in through the recent $100 billion commitment and 5 GW of Trainium capacity. Thinking Machines, the third pole in the "ex-OpenAI leadership's new lab" trifecta, now has Google Cloud as its primary public cloud partner. The three frontier labs have split three different ways.
Expert Perspectives
TechCrunch's exclusive noted that Google can "support the startup's reinforcement learning workloads, which Tinker's architecture relies on." That framing — capability fit rather than raw price — is what industry analysts say separates 2026 compute deals from the 2023-2024 era.
SiliconANGLE emphasized the non-exclusive nature of the deal as a sign that frontier labs are negotiating from a position of leverage. Two years ago, a lab took whatever capacity it could get. Today, they have choices — and they're using them.
What's Next
Watch for Tinker's commercial traction over the next two quarters. If Tinker becomes the default way enterprises fine-tune frontier models on their own data, the compute Thinking Machines just locked in starts looking conservative instead of aggressive. If Tinker struggles against competing RL-based fine-tuning products, the GB300 commitment will become a more painful bet.
Watch for Thinking Machines' second product. The company has been unusually quiet about its roadmap, but a second product launch in the next six months is widely expected. The infrastructure footprint now in place can support a much larger product portfolio than Tinker alone.
Watch Google Cloud's next earnings call. Thomas Kurian has been building toward a narrative shift — Google Cloud as the AI cloud — and the Thinking Machines announcement is one of the clearer data points that narrative is starting to land.
Bottom Line
The Thinking Machines-Google deal is the week's most interesting cloud announcement — not for dollar size, but for what it signals about where AI labs are putting their money. The era of one dominant cloud provider per frontier lab is over. Every major lab now has multi-cloud leverage, and providers are competing on infrastructure fit, not just price.