The deal anchors up to 6 gigawatts of AI compute capacity and marks one of the largest infrastructure expansions in the race to scale advanced AI systems for billions of users.
So, what are the details of the deal? Let's explore!
TL;DR
- Meta will deploy up to 6GW of AMD Instinct GPUs under a deal potentially worth $100 billion
- AMD issued a performance-based warrant for up to 160 million shares tied to shipment and stock milestones
- First 1GW deployment begins in the second half of 2026 using custom MI450-based GPUs and EPYC CPUs
Meta Commits To 6 Gigawatts Of AMD AI Compute Capacity
At the center of the agreement is Meta’s plan to power its next wave of AI systems with up to 6 gigawatts of AMD Instinct GPUs across multiple generations.
The initial deployment will rely on a custom GPU derived from AMD’s MI450 architecture, purpose-built to optimize Meta’s AI workloads at scale.
Shipments supporting the first gigawatt of deployment are expected to begin in the second half of 2026. These systems will integrate 6th Gen AMD EPYC processors, codenamed Venice, alongside ROCm software within the Helios rack-scale architecture.
Helios was jointly developed by AMD and Meta through the Open Compute Project to enable dense, rack-level AI infrastructure. The broader partnership also aligns both companies’ roadmaps across silicon, systems, and software to improve performance and efficiency.
“We are proud to expand our strategic partnership with Meta as they push the boundaries of AI at unprecedented scale,” said Dr. Lisa Su, chair and CEO, AMD. “This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta’s workloads, accelerating one of the industry’s largest AI deployments and placing AMD at the center of the global AI buildout.”
160 Million Share Warrant Tied To Performance Milestones
In addition to hardware commitments, AMD issued Meta a performance-based warrant for up to 160 million shares of AMD common stock. The warrant vests in tranches as specific shipment milestones are achieved, beginning with the first 1 gigawatt and scaling through the full 6-gigawatt target.
Further vesting is tied to AMD reaching certain stock price thresholds and Meta achieving defined technical and commercial milestones. AMD shares closed at $196.60 on Monday, while prior reporting indicated that the final tranche would require the stock to reach $600.
“We expect this partnership to drive substantial multi-year revenue growth and be accretive to our non-GAAP earnings per share, marking another significant step forward in delivering on our ambitious long-term financial model,” said Jean Hu, EVP, CFO and treasurer, AMD. “The performance-based structure also tightly aligns AMD and Meta around execution and long-term value creation.”
Diversifying Compute To Deliver Personal Superintelligence
The AMD agreement forms part of Meta’s broader Meta Compute strategy, which aims to build a diversified and resilient AI infrastructure stack.
“We’re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence,” said Mark Zuckerberg, founder and CEO of Meta. “This is an important step for Meta as we diversify our compute. I expect AMD to be an important partner for many years to come.”
Meta’s Broader AI Infrastructure Push
Meta has pledged to invest at least $600 billion in U.S. data centers and AI infrastructure over the coming years, including projected capital expenditures of $135 billion in 2026. It also unveiled plans for a $10 billion gas-powered data center campus in Indiana designed to support 1 gigawatt of compute capacity.
The company also recently signed a separate multi-year deal to expand its data centers with millions of NVIDIA CPUs and GPUs, while also advancing its own Meta Training and Inference Accelerator silicon program.
With up to 6 gigawatts of AMD GPUs committed and a structured equity alignment in place, Meta is building out one of the most aggressive AI infrastructure roadmaps in the industry.