Which NVIDIA and AMD GPUs are available at RunPod?
💡 Answer
As of April 9, 2026, RunPod provides access to the following GPU models:
B300, B200, H200, H100 SXM, H100 PCIe, H100 NVL, MI300X, A100 SXM, A100 PCIe, RTX 5090, RTX PRO 6000, L40S, L40, RTX 6000 Ada, RTX 5000 Ada, RTX A6000, RTX A5000, RTX 4090, RTX 4080 SUPER, RTX 4080, RTX 4070 Ti, RTX 3090 Ti, RTX 3090, RTX 3080 Ti, RTX 3080, RTX 3070, A40, A30, A2, L4
For memory-intensive workloads such as large language model training, the highest-VRAM option at RunPod is 288 GB (on the B300). Multi-GPU instances support up to 8 GPUs with NVLink interconnect for efficient parallel computation.
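As a quick sanity check after launching a multi-GPU pod, a short PyTorch snippet like the one below can confirm how many GPUs are visible, how much VRAM each exposes, and whether peer-to-peer access (the path NVLink accelerates) is enabled between devices. This is a minimal sketch, assuming a CUDA-enabled PyTorch build is present on the instance.

```python
import torch

# Minimal sketch: verify visible GPUs, their VRAM, and peer-to-peer access.
# Assumes a CUDA-enabled PyTorch build on the instance.
if not torch.cuda.is_available():
    raise SystemExit("No CUDA devices visible to PyTorch.")

n = torch.cuda.device_count()
print(f"Visible GPUs: {n}")

for i in range(n):
    props = torch.cuda.get_device_properties(i)
    vram_gb = props.total_memory / 1024**3
    print(f"  GPU {i}: {props.name}, {vram_gb:.0f} GB VRAM")

# Peer-to-peer access is a prerequisite for fast direct GPU-to-GPU
# transfers (e.g. over NVLink) in multi-GPU training.
for i in range(n):
    for j in range(n):
        if i != j:
            p2p = torch.cuda.can_device_access_peer(i, j)
            print(f"  P2P {i} -> {j}: {'yes' if p2p else 'no'}")
```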
See which GPU models are currently in stock on RunPod's official website.
More FAQs about RunPod
- Who should use RunPod for cloud GPU?
- What is the current Trustpilot rating and number of reviews for RunPod?
- Does RunPod come with PyTorch, TensorFlow, or JAX pre-installed?
- Does RunPod support Docker, SSH, and Jupyter Notebooks?
- Can I run GPU workloads on RunPod without managing servers?
- What regions does RunPod operate in?
- What interconnect technology does RunPod use for multi-GPU training?
- Can I get discounted GPU rates at RunPod through spot instances?
- Are there any data transfer costs at RunPod?
- Can I try RunPod for free before committing?
- How much does RunPod cost per hour for GPU instances?
Guides Where RunPod Is Featured
- Best Cloud GPU Providers with NVIDIA RTX 3090
- Best Cloud GPUs for Generative AI
- Cheapest Cloud GPUs Under $0.50/hr
- Cloud GPU Providers with API & CLI Management
- Cloud GPU Providers with Docker & Custom Images
- Cloud GPU Providers with Free Credits
- Cloud GPU Providers with Jupyter Notebook Support
- Cloud GPU Providers with Kubernetes Support
- Cloud GPU Providers with Multi-Node GPU Clusters
- Cloud GPU Providers with NVLink or InfiniBand
- Cloud GPU Providers with Per-Second Billing
- Cloud GPU Providers with Persistent Storage
- Cloud GPU Providers with Serverless GPU Inference
- Cloud GPU Providers with Spot / Preemptible Instances
- Cloud GPU Providers with SSH Access
- Cloud GPU Providers with Zero Egress Fees
These guides include RunPod alongside other cloud GPU providers, grouped by hardware, pricing, features, and infrastructure.
RunPod vs Massed Compute vs Latitude.sh - GPU Provider Comparison (April 2026)
Side-by-side comparison of RunPod vs Massed Compute vs Latitude.sh. Quickly scan GPU models, maximum VRAM, interconnects, pricing, billing granularity, free credits, egress fees, storage options, regions, developer tooling, and compliance to narrow down your cloud GPU provider shortlist. Data updated April 2026.
| | RunPod<br>The cloud built for AI — deploy and scale GPU workloads from serverless inference to instant multi-node clusters on demand. | Massed Compute<br>GPU cloud with direct engineer support | Latitude.sh<br>Bare metal GPU cloud across 23 global locations |
|---|---|---|---|
| Overview | | | |
| Trustpilot Rating | 3.8 | N/A | 3.7 |
| Headquarters | United States | United States | Brazil |
| Provider Type | GPU-Focused | GPU-Focused | Bare Metal |
| Best For | AI training, inference, fine-tuning, Stable Diffusion, batch processing, rendering, research, LLM serving, generative AI | AI training, inference, VFX rendering, generative AI, fine-tuning, HPC, Stable Diffusion, research | AI training, inference, bare metal GPU, fine-tuning, research, dedicated workloads, generative AI |
| GPU Hardware | | | |
| GPU Models | B300, B200, H200, H100 SXM, H100 PCIe, H100 NVL, MI300X, A100 SXM, A100 PCIe, RTX 5090, RTX PRO 6000, L40S, L40, RTX 6000 Ada, RTX 5000 Ada, RTX A6000, RTX A5000, RTX 4090, RTX 4080 SUPER, RTX 4080, RTX 4070 Ti, RTX 3090 Ti, RTX 3090, RTX 3080 Ti, RTX 3080, RTX 3070, A40, A30, A2, L4 | A30, RTX A5000, RTX A6000, L40S, A100 SXM, H100 PCIe, H100 SXM, H100 NVL, RTX PRO 6000, H200 NVL | A30, RTX A5000, RTX A6000, L40S, RTX 6000 Ada, A100 SXM, H100 SXM, GH200, RTX PRO 6000 |
| Max VRAM (GB) | 288 | 141 | 96 |
| Max GPUs/Instance | 8 | 8 | 8 |
| Interconnect | NVLink | NVLink | NVLink |
| Pricing | | | |
| Starting Price ($/hr) | $0.06/hr | $0.35/hr | $0.35/hr |
| Billing Granularity | Per-second | Per-minute | Per-hour |
| Spot/Preemptible | Yes | No | No |
| Reserved Discounts | 15-29% (1-month to 1-year plans) | N/A | N/A |
| Free Credits | $5-$500 bonus after first $10 spend | None | $200 via referral program |
| Egress Fees | None (Free) | None | None |
| Storage | Container/Volume ($0.10/GB/mo), Idle Volume ($0.20/GB/mo), Network Storage ($0.07/GB/mo 1TB) | Local NVMe included with instances | Local NVMe included (up to 4x 3.8TB), Block Storage $0.10/GB/mo, Filesystem Storage $0.05/GB/mo |
| Infrastructure | | | |
| Regions | 31 global regions | United States (Tier III data centers) | 23 locations: US (8 cities), LATAM (5), Europe (5), APAC (4), Mexico City. GPU in Dallas, Frankfurt, Sydney, Tokyo |
| Uptime SLA | 99.99% | Tier III (99.98% design) | 99.9% |
| Developer Experience | | | |
| Frameworks | PyTorch, TensorFlow, JAX, ONNX, CUDA | PyTorch, TensorFlow, CUDA, cuDNN, ComfyUI, pre-configured ML templates | ML-optimized images, PyTorch, TensorFlow (user-installed), CUDA |
| Docker Support | Yes | Yes | Yes |
| SSH Access | Yes | Yes | Yes |
| Jupyter Notebooks | Yes | No | No |
| API / CLI | Yes | Yes | Yes |
| Setup Time | Instant | Minutes | Seconds |
| Kubernetes Support | No | No | No |
| Business Terms | | | |
| Min Commitment | None | None | None |
| Compliance | SOC 2 Type II | SOC 2 Type II HIPAA | Single-tenant isolation DPA available |
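To put the billing granularity rows in perspective, the sketch below estimates what a short, bursty job would cost under per-second, per-minute, and per-hour billing. It is illustrative only: the hourly rates come from the Starting Price row above, and the 10-minute job duration is a made-up example.

```python
import math

# Illustrative sketch: cost of a hypothetical 10-minute job under different
# billing granularities. Hourly rates are the starting prices from the table above.
JOB_SECONDS = 10 * 60

providers = {
    "RunPod":         {"rate_per_hr": 0.06, "granularity_s": 1},     # per-second billing
    "Massed Compute": {"rate_per_hr": 0.35, "granularity_s": 60},    # per-minute billing
    "Latitude.sh":    {"rate_per_hr": 0.35, "granularity_s": 3600},  # per-hour billing
}

for name, p in providers.items():
    # Round the runtime up to the nearest billing increment, then charge pro rata.
    billed_seconds = math.ceil(JOB_SECONDS / p["granularity_s"]) * p["granularity_s"]
    cost = billed_seconds / 3600 * p["rate_per_hr"]
    print(f"{name}: ~${cost:.4f} for a {JOB_SECONDS // 60}-minute job")
```

The takeaway is simply that coarser billing increments round short jobs up to a full minute or hour, which matters most for bursty workloads; for long-running training jobs the difference shrinks.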