By VRAM
- 12 guides available
- Open a guide to see matched GPU models
- Jump to live cloud pricing for any GPU
Best 12+ GB VRAM Cloud GPUs — May 2026
Every cloud GPU with at least 12 GB of VRAM — the floor for running modern small-to-mid LLMs, image generation, and most fine-tuning jobs.
Best 16+ GB VRAM Cloud GPUs — May 2026
Cloud GPUs with 16 GB+ VRAM — comfortable for SDXL inference, 7B-13B model fine-tuning, and most production inference workloads.
Best 24+ GB VRAM Cloud GPUs — May 2026
GPUs with 24 GB+ VRAM unlock 13B-30B model inference, larger batch sizes, and longer context windows.
Best 32+ GB VRAM Cloud GPUs — May 2026
GPUs with 32 GB+ VRAM — the entry point for serious training and for parameter-efficient fine-tuning of 30B-class models without sharding.
Best 48+ GB VRAM Cloud GPUs — May 2026
48 GB+ VRAM is the sweet spot for LoRA/QLoRA fine-tuning of 30B-70B models on a single GPU and for production multi-tenant inference.
Best 64+ GB VRAM Cloud GPUs — May 2026
64 GB+ VRAM — covers premium professional workloads and the larger data-center-class GPUs.
Best 80+ GB VRAM Cloud GPUs — May 2026
80 GB+ VRAM is the default for frontier AI training (A100 80GB, H100, H200, B200, MI300X). Compare every option side-by-side.
Best 96+ GB VRAM Cloud GPUs — May 2026
96 GB+ VRAM — for 70B-class fine-tuning without sharding and for multi-GPU inference of the largest open models.
Best 141+ GB VRAM Cloud GPUs — May 2026
141 GB+ VRAM — H200-class and above. The minimum for serving Llama-3.1 405B or DeepSeek-V3 on a single eight-GPU node (see the sizing sketch after this list).
Best 192+ GB VRAM Cloud GPUs — May 2026
192 GB+ VRAM — Blackwell-class and MI300X. The maximum per-GPU memory capacity for trillion-parameter-regime workloads.
Best 256+ GB VRAM Cloud GPUs — May 2026
256 GB+ VRAM — frontier AI training territory. MI325X, MI350X, MI355X, B300, GB200.
Best 288+ GB VRAM Cloud GPUs — May 2026
288 GB+ VRAM — the absolute top tier of single-GPU memory capacity available today.
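The node-level claims above come down to simple arithmetic: at FP8, weights cost roughly one byte per parameter, plus headroom for KV cache and runtime buffers. Here is a minimal Python sketch, assuming FP8 weights and a flat 20% overhead allowance (both figures are assumptions, not numbers from these guides, and the helper names are hypothetical):

```python
import math

# Back-of-envelope VRAM sizing for LLM serving.
# Assumptions (not from the guides): FP8 weights at 1 byte per parameter,
# plus a flat 20% allowance for KV cache, activations, and runtime buffers.
# Real deployments vary with context length and batch size.

def vram_needed_gb(params_b: float, bytes_per_param: float = 1.0,
                   overhead: float = 0.20) -> float:
    """Rough GB of VRAM to serve a model with params_b billion parameters."""
    return params_b * bytes_per_param * (1.0 + overhead)

def gpus_required(params_b: float, vram_per_gpu_gb: float) -> int:
    """Smallest GPU count whose pooled VRAM covers the estimate."""
    return math.ceil(vram_needed_gb(params_b) / vram_per_gpu_gb)

print(gpus_required(405, 141))  # Llama-3.1 405B on 141 GB H200s -> 4
print(gpus_required(671, 141))  # DeepSeek-V3 (671B) on H200s    -> 6
print(gpus_required(405, 80))   # same 405B on 80 GB GPUs        -> 7 of an 8-GPU node
```

Under those assumptions, Llama-3.1 405B needs about 486 GB, so it fits on four 141 GB GPUs with room to spare on a standard eight-way node (1,128 GB pooled), while DeepSeek-V3's 671B parameters already exceed the 640 GB pooled by an eight-way 80 GB node; that gap is the basis for the single-node claim in the 141 GB+ guide.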