AWS and Meta Strengthen AI Infrastructure Partnership with Custom Chip Deployment

Meta has signed an agreement to deploy Amazon Web Services (AWS) Graviton processors at scale, marking a major expansion of the companies’ long-standing partnership as Meta accelerates development of its next-generation artificial intelligence systems.

The deployment will begin with tens of millions of Graviton cores, with capacity to scale further as Meta’s AI requirements expand. The deal reflects a broader shift in AI infrastructure design: GPUs remain central to model training, while CPU-intensive workloads—such as real-time reasoning, search, code generation, and multi-step task orchestration—are becoming increasingly important with the rise of agentic AI systems.

AWS says its latest Graviton5 chips are designed specifically for these workloads, offering higher efficiency and performance for large-scale processing demands. The chips will support a wide range of Meta’s AI-related operations, enabling billions of interactions and complex workflows across distributed systems.

The Graviton5 architecture features 192 cores and a cache five times larger than its predecessor’s, cutting inter-core communication latency by up to 33% and improving data-processing speed and bandwidth for AI systems that require continuous reasoning and coordination.

Built on the AWS Nitro System, Graviton chips offer high performance, security, and availability, along with support for Elastic Fabric Adapter (EFA) for low-latency, high-bandwidth communication between instances—critical for large-scale AI workloads.

As Meta expands its AI infrastructure, the company is increasingly relying on diversified compute sources to support its growing needs.
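To make the idea of diversified compute concrete: running the same workload across x86 and Arm-based fleets typically means detecting the CPU architecture at deploy time and selecting the matching build. The helper below is a hypothetical sketch of such a check, not code from Meta or AWS; Graviton instances, like other Arm machines, report their architecture as "aarch64".

```python
import platform

def select_build_target() -> str:
    """Pick a build target based on the detected CPU architecture.

    Hypothetical helper: Arm-based instances (e.g., Graviton)
    report "aarch64", while x86 instances report "x86_64"/"AMD64".
    """
    machine = platform.machine()
    if machine == "aarch64":
        return "arm64"   # Arm-based fleet
    if machine in ("x86_64", "AMD64"):
        return "amd64"   # x86-based fleet
    return machine       # pass any other architecture through unchanged

print(select_build_target())
```

In practice, a dispatch step like this is what lets mixed fleets pull architecture-specific container images or binaries without separate deployment pipelines.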

“This isn’t just about chips; it’s about giving customers the infrastructure foundation, as well as data and inference services, to build AI that understands, anticipates, and scales efficiently to billions of people worldwide. Meta’s expanded partnership, deploying tens of millions of Graviton cores, shows what happens when you combine purpose-built silicon with the full AWS AI stack to power the next generation of agentic AI.”

Nafea Bshara, Vice President and Distinguished Engineer, Amazon

“As we scale the infrastructure behind Meta’s AI ambitions, diversifying our compute sources is a strategic imperative. AWS has been a trusted cloud partner for years, and expanding to Graviton allows us to run the CPU-intensive workloads behind agentic AI with the performance and efficiency we need at our scale,”

Santosh Janardhan, Head of Infrastructure, Meta

AWS also highlighted the energy efficiency advantages of Graviton5, which is built on 3-nanometer process technology. The company says the chip delivers up to 25% better performance than the previous generation while improving energy efficiency, supporting both cost optimization and sustainability goals.

The partnership signals a deeper integration between the companies as demand for AI compute infrastructure accelerates globally, and highlights the growing importance of custom silicon in powering next-generation AI systems.