NVIDIA H100 SXM architecture and memory — deep dive
💡 Answer
The NVIDIA H100 SXM datasheet shows a Hopper-architecture GPU with 80 GB of HBM3 memory, 3,350 GB/s of memory bandwidth, 990 TFLOPS of dense FP16 Tensor Core compute, 67 TFLOPS of FP32 compute, and a 700 W thermal envelope. It reached market in 2023.
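As a rough sanity check on how those headline numbers interact, here is a back-of-the-envelope roofline calculation in Python. It is a sketch, not a benchmark: it assumes the 990 TFLOPS figure is the dense FP16 Tensor Core rate and that peak bandwidth is achievable, neither of which real kernels fully sustain.

```python
# Back-of-the-envelope roofline math for the H100 SXM datasheet numbers.
# Assumption: 990 TFLOPS is the dense FP16 Tensor Core rate; real kernels
# rarely sustain peak on either axis.

PEAK_FP16_FLOPS = 990e12   # FLOP/s (dense FP16 Tensor Core, per datasheet)
PEAK_BANDWIDTH = 3350e9    # bytes/s (3,350 GB/s HBM3)

# Critical arithmetic intensity: FLOPs a kernel must perform per byte
# moved before compute, not memory, becomes the bottleneck.
critical_intensity = PEAK_FP16_FLOPS / PEAK_BANDWIDTH
print(f"compute-bound above ~{critical_intensity:.0f} FLOPs/byte")  # ~296

# Example: batch-1 token generation for an FP16 LLM reads every weight
# once per token (~2 FLOPs per weight), so it sits far below that line
# and is memory-bound: tokens/s is roughly bandwidth / model bytes.
model_bytes = 70e9 * 2     # hypothetical 70B-parameter model in FP16
print(f"upper bound: ~{PEAK_BANDWIDTH / model_bytes:.0f} tokens/s")  # ~24
```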
Those specs make it capable of handling the full modern AI stack: pre-training moderately sized models, fine-tuning most LLMs, serving real-time production inference at useful batch sizes, and accelerating diffusion and image-generation workloads.
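To make "fine-tuning most LLMs" concrete, the sketch below estimates whether common model sizes fit in the card's 80 GB of HBM3. The bytes-per-parameter figures are standard rules of thumb (2 bytes for FP16 weights, roughly 16 bytes per parameter for full fine-tuning with Adam), not vendor numbers, and the model sizes are illustrative.

```python
# Rough VRAM sizing against the H100 SXM's 80 GB. Rule-of-thumb byte
# costs per parameter; actual usage varies with framework and config.

VRAM_GB = 80

def fits(params_b: float, bytes_per_param: float, label: str) -> None:
    need_gb = params_b * bytes_per_param  # params in billions -> GB
    verdict = "fits" if need_gb <= VRAM_GB else "needs multi-GPU"
    print(f"{label}: ~{need_gb:.0f} GB -> {verdict}")

# FP16 inference: ~2 bytes/param (plus KV cache, ignored here).
fits(13, 2, "13B FP16 inference")        # ~26 GB -> fits
fits(70, 2, "70B FP16 inference")        # ~140 GB -> needs multi-GPU

# Full fine-tuning with Adam: ~16 bytes/param
# (weights + gradients + optimizer states, mixed precision).
fits(7, 16, "7B full fine-tune")         # ~112 GB -> needs multi-GPU

# LoRA-style fine-tuning keeps base weights frozen in FP16 (~2 bytes)
# and trains only small adapters, which is why most single-card LLM
# fine-tuning on 80 GB uses parameter-efficient methods.
fits(13, 2.5, "13B LoRA fine-tune")      # ~32 GB -> fits
```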
Ready to deploy? Latitude.sh has NVIDIA H100 SXM from $1.57/hr. Vultr and Vast.ai also carry it.
More FAQs about NVIDIA H100 SXM
Novita AI vs Latitude.sh vs Vultr vs Vast.ai vs Massed Compute vs DigitalOcean - GPU Provider Comparison (April 2026)
Side-by-side comparison of Novita AI vs Latitude.sh vs Vultr vs Vast.ai vs Massed Compute vs DigitalOcean. Quickly compare GPU models, hourly pricing, spot instances, billing granularity, VRAM, regions, developer tools, Kubernetes support, and compliance to narrow down your cloud GPU provider shortlist. Data updated April 2026.
| | Novita AI | Latitude.sh | Vultr | Vast.ai | Massed Compute | DigitalOcean |
|---|---|---|---|---|---|---|
| Tagline | AI & Agent Cloud platform with 200+ model APIs, GPU instances, and serverless inference at scale | Bare metal GPU cloud across 23 global locations | High-performance cloud GPU across 32 global regions | Instant GPUs. Transparent Pricing. | GPU cloud with direct engineer support | Simple, scalable GPU cloud for AI/ML |
| Overview | | | | | | |
| Trustpilot Rating | 3.3 | 3.7 | 1.8 | 4.4 | 0 | 4.6 |
| Headquarters | United States | Brazil | United States | United States | United States | United States |
| Provider Type | GPU-Focused | Bare Metal | Multi-Cloud | GPU Marketplace | GPU-Focused | N/A |
| Best For | AI training, inference, fine-tuning, generative AI, research, LLM serving, Stable Diffusion | AI training, inference, bare metal GPU, fine-tuning, research, dedicated workloads, generative AI | AI training, inference, video rendering, HPC, Stable Diffusion, game development, generative AI, fine-tuning, research | AI training, inference, fine-tuning, Stable Diffusion, batch processing, research, LLM serving, generative AI | AI training, inference, VFX rendering, generative AI, fine-tuning, HPC, Stable Diffusion, research | AI training, inference, fine-tuning, LLM deployment, LLM serving, computer vision, startups, generative AI, research |
| GPU Hardware | | | | | | |
| GPU Models | H100 SXM, A100 SXM, L40S, RTX 4090, RTX 6000 Ada, RTX 5090, RTX 3090 | A30, RTX A5000, RTX A6000, L40S, RTX 6000 Ada, A100 SXM, H100 SXM, GH200, RTX PRO 6000 | A16, A40, L40S, A100 PCIe, GH200, A100 SXM, H100 SXM, B200, B300, MI300X, MI325X, MI355X | B200, H200, H100 SXM, H100 NVL, A100 SXM, A100 PCIe, RTX 5090, RTX 5080, RTX 5070 Ti, RTX 6000 Pro, RTX 6000 Ada, RTX 4500 Ada, RTX A6000, RTX A5000, RTX A4000, L40S, L40, A40, A10, RTX 4090, RTX 4080, RTX 4070 Ti, RTX 4070, RTX 4060 Ti, RTX 4060, RTX 3090 Ti, RTX 3090, RTX 3080 Ti, RTX 3080, RTX 3070 Ti, RTX 3070, Tesla V100, Tesla T4, A2, GTX 1080 | A30, RTX A5000, RTX A6000, L40S, A100 SXM, H100 PCIe, H100 SXM, H100 NVL, RTX PRO 6000, H200 NVL | RTX 4000 Ada, RTX 6000 Ada, L40S, MI300X, H100 SXM, H200 |
| Max VRAM (GB) | 80 | 96 | 288 | 192 | 141 | 192 |
| Max GPUs/Instance | 8 | 8 | 16 | 8 | 8 | 8 |
| Interconnect | NVLink | NVLink | NVLink | NVLink, InfiniBand | NVLink | NVLink |
| Pricing | | | | | | |
| Starting Price ($/hr) | $0.11/hr | $0.35/hr | $0.47/hr | $0.06/hr | $0.35/hr | $0.76/hr |
| Billing Granularity | Per-second | Per-hour | Per-hour | Per-second | Per-minute | Per-second |
| Spot/Preemptible | Yes | No | Yes | Yes | No | No |
| Reserved Discounts | N/A | N/A | N/A | Up to 50% (1-6 month reserved) | N/A | N/A |
| Free Credits | Up to $10,000 for startups | $200 via referral program | Up to $300 free credit for 30 days | Small test credit on signup | None | $200 free credit for 60 days |
| Egress Fees | None (Free) | None | Standard (varies by plan) | Varies by host ($/TB) | None | None (included in plan) |
| Storage | Container disk (60GB free), volume disk, network volumes | Local NVMe included (up to 4x 3.8TB), Block Storage $0.10/GB/mo, Filesystem Storage $0.05/GB/mo | 350 GB - 61 TB NVMe (included), Block Storage at $0.10/GB/mo, S3-compatible Object Storage | Varies by host ($/GB/hr, charged while instance exists) | Local NVMe included with instances | 500-720 GiB NVMe boot (included), 5 TiB NVMe scratch on larger configs, Volumes at $0.10/GiB/mo |
| Infrastructure | | | | | | |
| Regions | US, EU, APAC, South America, Africa, Middle East (20+ locations) | 23 locations: US (8 cities), LATAM (5), Europe (5), APAC (4), Mexico City. GPU in Dallas, Frankfurt, Sydney, Tokyo | 32 regions across 6 continents (Americas, Europe, Asia, Australia, Africa) | 500+ locations, 40+ data centers | United States (Tier III data centers) | New York (NYC2), Toronto (TOR1), Atlanta (ATL1), Richmond (RIC1), Amsterdam (AMS3) |
| Uptime SLA | 99.9% | 99.9% | 100% | No formal SLA (host reliability scores visible) | Tier III (99.98% design) | 99% |
| Developer Experience | | | | | | |
| Frameworks | PyTorch, TensorFlow, CUDA, cuDNN, TensorRT | ML-optimized images, PyTorch, TensorFlow (user-installed), CUDA | PyTorch, TensorFlow, CUDA, cuDNN, ROCm, Hugging Face, NVIDIA NGC | PyTorch, TensorFlow, CUDA, vLLM, ComfyUI | PyTorch, TensorFlow, CUDA, cuDNN, ComfyUI, pre-configured ML templates | PyTorch, TensorFlow, Jupyter, Miniconda, CUDA, ROCm, Hugging Face |
| Docker Support | Yes | Yes | Yes | Yes | Yes | Yes |
| SSH Access | Yes | Yes | Yes | Yes | Yes | Yes |
| Jupyter Notebooks | Yes | No | Yes | Yes | No | Yes |
| API / CLI | Yes | Yes | Yes | Yes | Yes | Yes |
| Setup Time | N/A | Seconds | Minutes | Seconds | Minutes | Minutes |
| Kubernetes Support | No | No | Yes | No | No | Yes |
| Business Terms | | | | | | |
| Min Commitment | None | None | None | None | None | None |
| Compliance | SOC 2 | Single-tenant isolation, DPA available | SOC 2+ (HIPAA), PCI, ISO 27001, ISO 27017, ISO 27018, ISO 20000-1, CSA STAR Level 1 | SOC 2 Type 2, HIPAA, GDPR, CCPA | SOC 2 Type II, HIPAA | SOC 2 Type II, SOC 3, HIPAA (with BAA), CSA STAR Level 1 |
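To turn the pricing rows above into dollar figures, here is a small cost-arithmetic sketch. It reuses the $1.57/hr H100 SXM rate quoted earlier and Vast.ai's stated maximum reserved discount purely as illustrative inputs; the discount does not actually apply to Latitude.sh's rate, and real quotes vary by provider and region.

```python
# Effective cost arithmetic for the pricing rows above. Prices are the
# on-demand figures quoted in this comparison; real quotes vary.

H100_HOURLY = 1.57              # $/hr, Latitude.sh H100 SXM from above

# A 3-day (72-hour) fine-tuning run on a single H100 SXM:
run_hours = 72
print(f"72h run: ${H100_HOURLY * run_hours:.2f}")    # $113.04

# Billing granularity matters for short, bursty jobs: per-hour billing
# rounds a 10-minute job up to a full hour, per-second billing does not.
job_minutes = 10
per_hour_bill = H100_HOURLY * 1                      # rounded up to 1 hr
per_second_bill = H100_HOURLY * (job_minutes / 60)
print(f"10-min job: per-hour ${per_hour_bill:.2f} "
      f"vs per-second ${per_second_bill:.2f}")       # $1.57 vs $0.26

# A 50% reserved discount (Vast.ai's stated 1-6 month maximum) applied
# to a month of continuous use, as a purely hypothetical example:
monthly_on_demand = H100_HOURLY * 24 * 30            # $1,130.40
print(f"month reserved at 50% off: ${monthly_on_demand * 0.5:.2f}")
```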