Can I run distributed training across multiple GPUs at Cherry Servers?

💡 Answer

Here is how Cherry Servers handles multi-GPU workloads:

GPU interconnect: PCIe
Maximum GPUs per instance: 2
Multi-node support: No
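
On a provisioned instance you can verify both figures before launching a job. Here is a minimal sanity check, assuming the NVIDIA driver and PyTorch are installed (standard tooling, nothing Cherry Servers-specific):

```python
# Post-provisioning sanity check: confirm GPU count and interconnect.
# Assumes the NVIDIA driver and PyTorch are installed; `nvidia-smi`
# ships with the driver and is not provider-specific.
import subprocess

import torch

print(f"Visible GPUs: {torch.cuda.device_count()}")  # expect at most 2 here

# On a PCIe-only machine the topology matrix shows PIX/PXB/PHB between
# GPU pairs; NVLink systems show NV1, NV2, ... instead.
topo = subprocess.run(["nvidia-smi", "topo", "-m"],
                      capture_output=True, text=True)
print(topo.stdout)
```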

When scaling beyond a single GPU, the interconnect determines how efficiently GPUs exchange data during distributed-training collectives such as all-reduce gradient synchronization. Cherry Servers uses PCIe connectivity, which offers lower GPU-to-GPU bandwidth than the NVLink and InfiniBand options listed for other providers below, so it is a key factor when comparing multi-GPU providers for large-scale AI workloads.
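
With a ceiling of two PCIe-connected GPUs on a single node, data-parallel training with PyTorch DistributedDataParallel is the natural pattern. The sketch below is illustrative only (the model, tensors, and the train_ddp.py filename are placeholders): NCCL carries the gradient all-reduce over PCIe, so each synchronization step is slower than on NVLink hardware, but the code itself is identical either way.

```python
# Minimal single-node data-parallel sketch for a 2-GPU instance.
# The Linear model and random tensors are placeholders, not a real workload.
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")  # all-reduce runs over PCIe here

    device = torch.device(f"cuda:{local_rank}")
    model = DDP(nn.Linear(1024, 1024).to(device), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(10):
        x = torch.randn(32, 1024, device=device)
        y = torch.randn(32, 1024, device=device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()  # DDP overlaps the gradient all-reduce with backward
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()

# Launch both workers on the one node (multi-node is not available here):
#   torchrun --standalone --nproc_per_node=2 train_ddp.py
```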

Check currently available multi-GPU configurations on the official Cherry Servers website.


Guides Where Cherry Servers Is Featured

These guides include Cherry Servers alongside other cloud GPU providers, grouped by hardware, pricing, features, and infrastructure.

Cherry Servers vs Vast.ai vs DigitalOcean - GPU Provider Comparison (March 2026)

Side-by-side comparison of Cherry Servers vs Vast.ai vs DigitalOcean. Quickly scan GPU models, VRAM, interconnects, pricing, billing granularity, free credits, regions, developer tooling, and compliance to narrow down your cloud GPU provider shortlist. Data updated March 2026.

| | Cherry Servers | Vast.ai | DigitalOcean |
|---|---|---|---|
| Tagline | Bare metal GPU servers with 24 years of hosting experience and full hardware-level control. | Instant GPUs. Transparent Pricing. | Simple, scalable GPU cloud for AI/ML |

Overview

| | Cherry Servers | Vast.ai | DigitalOcean |
|---|---|---|---|
| Trustpilot Rating | 4.6 | 4.4 | 4.6 |
| Headquarters | Lithuania | United States | United States |
| Provider Type | N/A | GPU Marketplace | N/A |
| Best For | AI training, inference, fine-tuning, rendering, research, HPC, generative AI, deep learning | AI training, inference, fine-tuning, Stable Diffusion, batch processing, research, LLM serving, generative AI | AI training, inference, fine-tuning, LLM deployment, LLM serving, computer vision, startups, generative AI, research |

GPU Hardware

| | Cherry Servers | Vast.ai | DigitalOcean |
|---|---|---|---|
| GPU Models | A100, A40, A16, A10, A2, Tesla P4 | B200, H200, H100 SXM, H100 NVL, A100 SXM, A100 PCIe, RTX 5090, RTX 5080, RTX 5070 Ti, RTX 6000 Pro, RTX 6000 Ada, RTX 4500 Ada, RTX A6000, RTX A5000, RTX A4000, L40S, L40, A40, A10, RTX 4090, RTX 4080, RTX 4070 Ti, RTX 4070, RTX 4060 Ti, RTX 4060, RTX 3090 Ti, RTX 3090, RTX 3080 Ti, RTX 3080, RTX 3070 Ti, RTX 3070, Tesla V100, Tesla T4, A2, GTX 1080 | RTX 4000 Ada, RTX 6000 Ada, L40S, MI300X, H100 SXM, H200 |
| Max VRAM (GB) | 80 | 192 | 192 |
| Max GPUs/Instance | 2 | 8 | 8 |
| Interconnect | PCIe | NVLink, InfiniBand | NVLink |

Pricing

| | Cherry Servers | Vast.ai | DigitalOcean |
|---|---|---|---|
| Starting Price ($/hr) | $0.16 | $0.06 | $0.76 |
| Billing Granularity | Per-hour | Per-second | Per-second |
| Spot/Preemptible | No | Yes | No |
| Reserved Discounts | N/A | Up to 50% (1-6 month reserved) | N/A |
| Free Credits | None | Small test credit on signup | $200 free credit for 60 days |
| Egress Fees | N/A | Varies by host ($/TB) | None (included in plan) |
| Storage | NVMe SSD; Elastic Block Storage ($0.071/GB/mo) | Varies by host ($/GB/hr, charged while instance exists) | 500-720 GiB NVMe boot (included); 5 TiB NVMe scratch on larger configs; Volumes at $0.10/GiB/mo |

Infrastructure

| | Cherry Servers | Vast.ai | DigitalOcean |
|---|---|---|---|
| Regions | Lithuania, Netherlands, Germany, Sweden, US, Singapore (6 locations) | 500+ locations, 40+ data centers | New York (NYC2), Toronto (TOR1), Atlanta (ATL1), Richmond (RIC1), Amsterdam (AMS3) |
| Uptime SLA | 99.97% | No formal SLA (host reliability scores visible) | 99% |

Developer Experience

| | Cherry Servers | Vast.ai | DigitalOcean |
|---|---|---|---|
| Frameworks | PyTorch, TensorFlow, CUDA (bare metal, full-stack control) | PyTorch, TensorFlow, CUDA, vLLM, ComfyUI | PyTorch, TensorFlow, Jupyter, Miniconda, CUDA, ROCm, Hugging Face |
| Docker Support | Yes | Yes | Yes |
| SSH Access | Yes | Yes | Yes |
| Jupyter Notebooks | No | Yes | Yes |
| API / CLI | Yes | Yes | Yes |
| Setup Time | Minutes | Seconds | Minutes |
| Kubernetes Support | Yes | No | Yes |

Business Terms

| | Cherry Servers | Vast.ai | DigitalOcean |
|---|---|---|---|
| Min Commitment | None | None | None |
| Compliance | ISO 27001, ISO 20000-1, GDPR, PCI DSS | SOC 2 Type 2, HIPAA, GDPR, CCPA | SOC 2 Type II, SOC 3, HIPAA (with BAA), CSA STAR Level 1 |