Amazon Inks $38 Billion Deal with OpenAI to Supply NVIDIA Chips: Details of the Seven-Year Contract and Its Market Impact

Amazon Web Services (AWS), the cloud computing arm of e-commerce giant Amazon, has announced a massive $38 billion deal with OpenAI, the developer of the popular ChatGPT chatbot. Under the terms of the seven-year contract, AWS will supply OpenAI with hundreds of thousands of NVIDIA graphics processing units (GPUs) to meet the rapidly growing demand for computational power required to train and run artificial intelligence models.

NVIDIA GPUs—particularly the H100, H200, and upcoming Blackwell (B200) series—have become the de facto standard in generative AI due to their CUDA architecture and optimized libraries for parallel computing. Analysts estimate that even a single model like GPT-4o requires tens of thousands of GPUs simultaneously for inference (response generation), while training next-generation versions demands hundreds of thousands.

AWS, which already holds over 31% of the cloud market, has invested billions in data centers featuring liquid cooling and high-speed NVLink and InfiniBand interconnects. The new agreement includes:

  • Phased delivery of up to 500,000 GPUs over seven years;
  • Priority access for OpenAI to new NVIDIA chips immediately upon release;
  • Flexible pricing based on workload (on-demand, reserved instances, spot instances);
  • Integration with AWS Trainium—Amazon’s in-house AI training chips—as a long-term alternative to NVIDIA.

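The three pricing models listed above trade cost against commitment and reliability. As a rough illustration of how they compare, the sketch below estimates monthly cost per GPU instance under each model; the hourly rate and discount factors are hypothetical placeholders, not actual AWS prices or terms of this deal.

```python
# Illustrative comparison of the three pricing models named in the agreement.
# ON_DEMAND_RATE and both discount factors are assumed, not real AWS prices.

ON_DEMAND_RATE = 40.0     # $/hour, pay-as-you-go (hypothetical)
RESERVED_DISCOUNT = 0.40  # reserved instances: assumed ~40% off on-demand
SPOT_DISCOUNT = 0.70      # spot instances: assumed ~70% off, but interruptible

def monthly_cost(model: str, hours: float = 730) -> float:
    """Estimated monthly cost of one GPU instance under a pricing model."""
    rate = {
        "on-demand": ON_DEMAND_RATE,
        "reserved": ON_DEMAND_RATE * (1 - RESERVED_DISCOUNT),
        "spot": ON_DEMAND_RATE * (1 - SPOT_DISCOUNT),
    }[model]
    return rate * hours

for model in ("on-demand", "reserved", "spot"):
    print(f"{model:>10}: ${monthly_cost(model):,.0f}/month")
```

Under these assumed figures, reserved capacity saves roughly 40% for steady training workloads, while spot capacity suits interruptible batch jobs at the deepest discount.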
Immediately following the announcement, Amazon (AMZN) shares rose 6.2% on the NASDAQ, from $188 to $199.70 per share, adding more than $120 billion to the company's market capitalization in a single day. Morgan Stanley analysts called the deal a "strategic breakthrough" that strengthens AWS's position in the AI cloud segment and partially offsets competitive pressure from Microsoft Azure.

NVIDIA (NVDA) shares climbed 3.1%, though the gain was more muted due to already high market expectations. Year-to-date in 2025, NVIDIA stock has risen 145%, pushing its market capitalization above $3.8 trillion.

Despite the massive AWS deal, OpenAI continues to diversify its compute providers. The company currently partners with:

| Provider        | Type of Collaboration                    | Estimated Capacity          |
|-----------------|------------------------------------------|-----------------------------|
| Microsoft Azure | Primary partner ($13 billion investment) | ~1 million GPUs (2024–2026) |
| Oracle Cloud    | Backup clusters                          | ~100,000 GPUs               |
| Google Cloud    | TPU v5 for research                      | ~50,000 TPUs                |
| CoreWeave       | Specialized GPU clusters                 | ~200,000 GPUs               |
| AWS             | New $38 billion deal                     | Up to 500,000 GPUs          |

This approach allows OpenAI to avoid vendor lock-in, optimize costs, and rapidly scale new models (such as GPT-5, expected in 2026).

Long-Term Industry Implications
  1. NVIDIA chip shortage may worsen: demand already exceeds supply by 40–50%.
  2. Talent competition: AWS and OpenAI will jointly launch an AI Research Hub in Seattle for 500 engineers.
  3. Environmental impact: the clusters will consume up to 1.5 GW of power—equivalent to a small city. AWS pledges 100% renewable energy offset by 2030.
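The 1.5 GW figure can be sanity-checked with back-of-envelope arithmetic. The per-unit numbers below are rough illustrative assumptions, not values disclosed by AWS or OpenAI: about 1 kW per accelerator, a factor of two for host CPUs, memory, and networking, and a power usage effectiveness (PUE) of 1.5 for cooling and facility overhead.

```python
# Back-of-envelope check of the ~1.5 GW consumption figure.
# All per-unit numbers are illustrative assumptions, not disclosed specs.

NUM_GPUS = 500_000    # upper bound from the phased-delivery clause
GPU_POWER_KW = 1.0    # assumed draw per accelerator, board included
HOST_OVERHEAD = 2.0   # assumed factor for CPUs, memory, networking
PUE = 1.5             # assumed power usage effectiveness (cooling etc.)

gpu_load_mw = NUM_GPUS * GPU_POWER_KW / 1_000          # raw GPU load in MW
facility_gw = gpu_load_mw * HOST_OVERHEAD * PUE / 1_000  # total in GW

print(f"Estimated facility draw: {facility_gw:.1f} GW")  # ~1.5 GW
```

With these assumptions, the estimate lands at the article's 1.5 GW, underscoring why the renewable-energy pledge matters at this scale.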

The Amazon–OpenAI deal is more than a chip supply contract—it is a strategic alliance shaping the architecture of future artificial intelligence. In a world where computational power is the new oil, such partnerships will determine who leads the AI race in the coming decade.

Source: AWS and OpenAI press releases, NASDAQ data, Bloomberg Intelligence analysis.