Tech Giants Meta and OpenAI Pledge Gigawatts of AMD Power
Two major technology players, Meta and OpenAI, have committed to deploying substantial amounts of AMD's graphics processing units (GPUs) for their artificial intelligence (AI) infrastructure. The scale of these commitments, 6 gigawatts (GW) of AI hardware power apiece for a combined 12 GW, marks a major shift in the AI hardware landscape and a significant win for AMD.
OpenAI's agreement, announced in October 2025, details a multi-year, multi-generation partnership under which the AI research lab will deploy 6 GW of AMD GPUs. The initial phase, slated for the second half of 2026, will bring the first gigawatt of capacity online using AMD's upcoming Instinct MI450 series GPUs. The collaboration positions AMD as a foundational compute partner for OpenAI's expanding needs and is intended to accelerate the development and deployment of large-scale generative AI models.
Meta, meanwhile, has solidified its own long-term AI infrastructure deal with AMD, also targeting 6 GW of AMD Instinct GPUs. Announced in February 2026, this agreement builds on an existing relationship: Meta already deploys millions of AMD EPYC processors alongside MI300 and MI350 series GPUs. The new pact involves collaboration on silicon, systems, and software roadmaps, aiming for integrated AI platforms tailored to Meta's specific workloads. The first deployments under the expanded partnership will feature a custom AMD Instinct GPU based on the MI450 architecture.
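To put the gigawatt figures in perspective, a quick back-of-envelope calculation translates a power commitment into a rough accelerator count. The per-device power draw and facility overhead below are illustrative assumptions, not figures from either announcement:

```python
# Back-of-envelope: what does a 6 GW power commitment imply in GPU count?
# The per-GPU wattage and overhead factor are assumptions for illustration.

def gpus_for_power(total_gw: float, watts_per_gpu: float, overhead: float = 1.3) -> int:
    """Estimate how many accelerators a facility power budget supports.

    `overhead` approximates PUE-style facility overhead (cooling, networking,
    storage), so only total_gw / overhead actually reaches the accelerators.
    """
    usable_watts = total_gw * 1e9 / overhead
    return int(usable_watts // watts_per_gpu)

# Assuming ~1.5 kW per next-generation accelerator (hypothetical figure),
# 6 GW works out to roughly three million GPUs:
print(gpus_for_power(6, 1_500))  # → 3076923
```

Under these assumptions, each 6 GW deal corresponds to accelerators numbering in the low millions, which is why the commitments are described as reshaping the AI hardware landscape.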
Implications and Market Stir
The sheer volume of these GPU commitments is a substantial boost for AMD; markets responded with a 10 percent jump in AMD's stock price following the Meta announcement. The deals suggest that AMD's strategy of offering high-performance computing with a focus on cost efficiency, particularly appealing for inference workloads, is gaining traction.
The pacts highlight AMD's efforts to challenge the dominance of existing players in the AI chip market, presenting a viable alternative by emphasizing an open and scalable architecture, robust software platforms like ROCm, and competitive pricing. For entities involved in AI development, these commitments serve as a signal regarding the infrastructure realities influencing model quality and deployment strategies.
Background: The AI Infrastructure Race
The increasing demand for computational power to train and run advanced AI models has fueled a fierce race for hardware solutions. Companies are seeking scalable, efficient, and cost-effective ways to meet these growing needs.
OpenAI's pursuit of cutting-edge AI advancement requires massive compute capacity, while Meta's need for scalable power to support its AI workloads and deliver AI experiences globally drives similar infrastructure requirements. AMD appears to be meeting both by leveraging its high-performance data center GPUs and a strategic approach to software and system integration.
The agreements between AMD, Meta, and OpenAI represent significant technology collaborations, poised to shape the future of AI infrastructure and potentially generate substantial revenue for AMD over the coming years.