Rent NVIDIA H100 SXM in the Cloud — Compare 7 Providers

NVIDIA's flagship data center GPU for AI training, and the industry standard for large language model training, with 80 GB of HBM3 memory and NVLink interconnects.

VRAM 80 GB HBM3
Bandwidth 3,350 GB/s
FP16 990.0 TFLOPS
FP32 67.0 TFLOPS
TDP 700W
Architecture Hopper
Cheapest On-Demand $1.57/hr
Average On-Demand $2.44/hr
Cheapest Spot $1.49/hr
Providers 7
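The summary figures above follow directly from the on-demand table below. As a quick sanity check, this sketch recomputes the cheapest and average on-demand rates from the listed per-provider prices:

```python
# Recompute the summary stats from the on-demand prices listed below.
on_demand = {
    "Latitude.sh": 1.57,
    "Vultr": 1.99,
    "Vast.ai": 2.20,
    "Massed Compute": 2.35,
    "Novita AI": 2.59,
    "RunPod": 2.99,
    "DigitalOcean": 3.39,
}

cheapest = min(on_demand.values())
average = sum(on_demand.values()) / len(on_demand)

print(f"Cheapest on-demand: ${cheapest:.2f}/hr")  # $1.57/hr
print(f"Average on-demand:  ${average:.2f}/hr")   # $2.44/hr
```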

Compare NVIDIA H100 SXM Cloud Pricing — 7 Providers

On-Demand

Provider       | Price / GPU / hr | Availability | Notes
Latitude.sh    | $1.57/hr         | Available    | GPU VM, NVLink
Vultr          | $1.99/hr         | Available    | 24-month contract
Vast.ai        | $2.20/hr         | Available    | Marketplace avg
Massed Compute | $2.35/hr         | Available    | PCIe
Novita AI      | $2.59/hr         | Available    |
RunPod         | $2.99/hr         | Available    | Secure Cloud, SXM
DigitalOcean   | $3.39/hr         | Available    | HGX H100

Spot / Preemptible

Provider  | Price / GPU / hr | Availability | Notes
Novita AI | $1.49/hr         | Available    | Spot, 1hr guaranteed
RunPod    | $2.69/hr         | Available    | Community Cloud
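Spot discounts vary widely by provider. Using the prices from the tables above, this sketch computes each provider's spot discount relative to its own on-demand rate:

```python
# Spot vs. on-demand discount per provider (prices from the tables above).
prices = {
    "Novita AI": {"on_demand": 2.59, "spot": 1.49},
    "RunPod": {"on_demand": 2.99, "spot": 2.69},
}

discounts = {}
for name, p in prices.items():
    discounts[name] = 1 - p["spot"] / p["on_demand"]
    print(f"{name}: {discounts[name]:.0%} cheaper on spot")
```

Novita AI's spot rate works out to roughly a 42% discount, while RunPod's Community Cloud saves about 10% over its Secure Cloud on-demand price.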

Reserved

Provider     | Price / GPU / hr | Availability | Notes
DigitalOcean | $2.50/hr         | Available    | 12-month, 8-GPU
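For sustained workloads, the reserved rate compounds quickly. A rough comparison of DigitalOcean's 12-month, 8-GPU reservation against paying the same provider's on-demand rate for a year of continuous use:

```python
# Rough annual cost of DigitalOcean's 12-month, 8-GPU reservation
# vs. continuous use at its on-demand rate (prices from the tables above).
HOURS_PER_YEAR = 24 * 365
GPUS = 8

reserved = 2.50 * GPUS * HOURS_PER_YEAR    # $175,200.00
on_demand = 3.39 * GPUS * HOURS_PER_YEAR   # $237,571.20

print(f"Reserved saves ${on_demand - reserved:,.2f} over 12 months")
```

At 100% utilization, the reservation saves roughly $62,000 over the year; the trade-off is the 12-month commitment regardless of actual usage.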

Prices last verified: April 13, 2026

NVIDIA H100 SXM Technical Specifications

Manufacturer: NVIDIA
Architecture: Hopper
VRAM: 80 GB HBM3
Memory Bandwidth: 3,350 GB/s
FP16 (Tensor): 990.0 TFLOPS
FP32: 67.0 TFLOPS
TDP: 700W
Release Year: 2023
Segment: Data center
Best For: Large-scale AI training, distributed workloads, LLM pre-training
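The specs above are enough for a back-of-envelope pre-training cost estimate, using the common FLOPs ≈ 6 × parameters × tokens approximation. The 40% MFU (model FLOPs utilization) and the use of the cheapest on-demand rate here are assumptions for illustration, not measured figures:

```python
# Back-of-envelope H100 SXM pre-training cost, using the common
# FLOPs ~= 6 * params * tokens approximation. MFU of 0.40 is an
# assumed (optimistic) figure; real runs often land lower.
def training_cost(params, tokens, price_per_hr=1.57,
                  peak_tflops=990.0, mfu=0.40):
    flops = 6 * params * tokens
    gpu_seconds = flops / (peak_tflops * 1e12 * mfu)
    gpu_hours = gpu_seconds / 3600
    return gpu_hours, gpu_hours * price_per_hr

# Example: a 7B-parameter model trained on 1T tokens
hours, cost = training_cost(7e9, 1e12)
print(f"{hours:,.0f} GPU-hours, about ${cost:,.0f}")
```

Under these assumptions a 7B-parameter model on 1T tokens needs on the order of 29,000 GPU-hours, i.e. roughly $46,000 at $1.57/hr, which is why the per-hour spread between providers matters at scale.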