NVIDIA H100 GPU prices split: Cheap for some, very expensive for others

NVIDIA H100 GPU rental prices have split into two distinct tiers: some listings cost as little as $1.39 per hour, while others run to $98 per hour or more.

The GPU rental market has decoupled. A raw NVIDIA H100 can now be sourced for as little as $1.39/hr on decentralized marketplaces, while major hyperscalers—AWS, Azure, and Google Cloud—maintain price points that can exceed $98/hr. This discrepancy is not a simple pricing error; it marks a fundamental split between utility-grade compute and enterprise-integrated infrastructure.

Market Segment          | Representative Providers      | Typical Price Range ($/hr) | Core Value Prop
Marketplace/Specialized | Vast.ai, RunPod, Together.ai  | $1.39 – $4.25              | Raw compute, low overhead
Hyperscalers            | AWS, Azure, GCP               | $30.00 – $98.00+           | Compliance, SLAs, Ecosystem

The Death of a Single Market Price

The concept of a "market price" for an H100 is an illusion. Data suggests that on-demand rental capacity for high-tier GPUs is effectively exhausted, as those currently holding capacity are refusing to relinquish it despite price volatility.

  • The Marketplace Tier: These platforms treat compute as a volatile, raw asset. Pricing is responsive but thin, moving with the availability of decentralized hardware nodes.

  • The Hyperscaler Tier: These entities are no longer selling "chips"; they are selling risk mitigation. Organizations paying the ~50x premium are purchasing compliance certifications, legal indemnities, and deep integration with proprietary software ecosystems.
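The "~50x premium" above follows from comparing representative mid-range rates rather than the extremes. A minimal sketch, using an assumed illustrative hyperscaler rate within the range cited in the table:

```python
# Illustrative only: the hyperscaler rate is an assumed mid-range figure,
# not a quote from any specific provider.
marketplace_rate = 1.39   # $/hr, low end of the marketplace tier
hyperscaler_rate = 70.00  # $/hr, assumed mid-range hyperscaler on-demand rate

premium = hyperscaler_rate / marketplace_rate
print(f"Effective premium: {premium:.0f}x")  # prints "Effective premium: 50x"
```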

Why the Gap Persists

Observers often mistake the cost of hardware access for the cost of the service wrapper. Specialized providers offer "pure compute" at a lower price point, but they lack the heavy scaffolding required by massive, risk-averse institutions.

  • Egress and Ancillary Costs: Hyperscalers often add 20% to 40% in less visible fees—egress charges, networking, and storage—on top of the headline GPU rate.

  • The Compliance Premium: For the enterprise, the cost of an H100 includes the implicit cost of a Service Level Agreement (SLA). If a workload crashes in a cheap marketplace, the loss is internal. If it crashes on a hyperscaler, there is a contractual remedy.

  • Utilization as Signal: In the on-demand space, utilization—not price—is the only high-frequency indicator of true demand. When availability vanishes, prices don't just shift; they effectively lock up.
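The effect of the ancillary fees mentioned above can be sketched as a simple markup on the quoted rate. The quoted rate below is the low end of the hyperscaler range from the table; the overhead fractions are the 20%–40% band cited above:

```python
def effective_hourly_cost(base_rate: float, ancillary_overhead: float) -> float:
    """Add egress/networking/storage overhead (as a fraction) to a quoted GPU rate."""
    return base_rate * (1 + ancillary_overhead)

quoted = 30.00  # $/hr, low end of the hyperscaler range
for overhead in (0.20, 0.40):  # the 20%-40% ancillary-fee band
    print(f"{overhead:.0%} overhead -> ${effective_hourly_cost(quoted, overhead):.2f}/hr")
# prints:
# 20% overhead -> $36.00/hr
# 40% overhead -> $42.00/hr
```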

Background: The Architecture of Scarcity

The H100 entered the global market during a period of acute supply contraction. This created a legacy of "pricing opacity" where providers utilized vastly different models to manage scarcity.



Current analysis indicates a trend toward a hybrid strategy: teams are training models on specialized, high-performance clusters to save capital, while pushing production inference workloads into hyperscale environments that prioritize uptime and reliability over the raw cost per clock cycle. The market is not "crashing" in a traditional sense; it is segregating into two distinct realities—one defined by the cost of electricity and hardware, the other by the cost of corporate security.
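The hybrid-strategy trade-off described above can be sketched with back-of-the-envelope figures. All hours and rates below are illustrative assumptions drawn loosely from the ranges in this article, not measured data:

```python
# Assumed workload profile and rates (illustrative, not measured):
TRAIN_HOURS = 500         # monthly burst training hours on a marketplace cluster
INFER_HOURS = 720         # always-on inference (24 h x 30 days) on a hyperscaler
MARKETPLACE_RATE = 2.50   # $/hr, mid-range marketplace price
HYPERSCALER_RATE = 40.00  # $/hr, hyperscaler rate including the service wrapper

# Hybrid: train cheap, serve on the SLA-backed platform.
hybrid = TRAIN_HOURS * MARKETPLACE_RATE + INFER_HOURS * HYPERSCALER_RATE
# Baseline: run everything on the hyperscaler.
all_hyperscaler = (TRAIN_HOURS + INFER_HOURS) * HYPERSCALER_RATE

print(f"Hybrid:          ${hybrid:,.2f}/month")           # $30,050.00/month
print(f"All-hyperscaler: ${all_hyperscaler:,.2f}/month")  # $48,800.00/month
print(f"Savings:         ${all_hyperscaler - hybrid:,.2f}/month")
```

Under these assumptions, moving only the training hours to the marketplace tier cuts the monthly bill by more than a third while keeping production inference on SLA-backed infrastructure.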

Frequently Asked Questions

Q: Why are NVIDIA H100 GPUs priced so differently on different websites?
NVIDIA H100 GPUs fall into two main price tiers. On simple rental sites, they cost about $1.39 to $4.25 per hour. On big cloud sites like AWS, Azure, and Google Cloud, they cost much more, from $30 to over $98 per hour.
Q: What is the difference between cheap H100 GPUs and expensive H100 GPUs?
The cheaper H100 GPUs are like raw computer power. They are good for basic tasks but don't have many extra services. The expensive H100 GPUs from big cloud companies include extra services like guaranteed uptime, security, and help if something goes wrong.
Q: Who pays the higher price for H100 GPUs and why?
Big companies and businesses pay the higher price. They need extra services like security, legal promises (SLAs), and easy connection to other tools. These companies are willing to pay more to avoid problems and ensure their work runs smoothly.
Q: What does 'utility-grade compute' mean for H100 GPUs?
'Utility-grade compute' means getting just the computer power, like electricity. These are the cheaper H100 GPUs found on marketplaces. They are good for users who can handle technical issues themselves.
Q: What does 'enterprise-integrated infrastructure' mean for H100 GPUs?
'Enterprise-integrated infrastructure' means getting a full package. This includes the H100 GPU plus security, support, and easy use with other business software. These are the more expensive H100 GPUs offered by big cloud providers.
Q: How are companies using H100 GPUs differently now?
Some companies are using the cheaper H100 GPUs on special sites to train AI models because it costs less. Then, they use the more expensive H100 GPUs on big cloud sites for running their AI programs live, where reliability is very important.