How much does Cherry Servers charge for outbound data transfer?

💡 Answer

Cherry Servers does not publish a single flat outbound data transfer (egress) rate; in the comparison data below its egress fee is listed as N/A, and bandwidth terms depend on the specific server plan.

This matters particularly for teams running distributed training across providers or serving models via API endpoints that return large payloads. Zero or low egress fees can significantly reduce overall costs for production inference deployments.
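To see how egress pricing drives those costs, here is a minimal sketch of the arithmetic. The per-GB rates below are hypothetical placeholders for illustration, not quoted prices from any provider:

```python
# Rough egress cost estimator for comparing providers.
# The per-GB rates below are HYPOTHETICAL placeholders, not quoted
# prices -- substitute each provider's actual published rate.

def monthly_egress_cost(gb_transferred: float, rate_per_gb: float) -> float:
    """Return the monthly outbound-transfer cost in USD."""
    return gb_transferred * rate_per_gb

# Example: serving 5 TB/month of model outputs via an API endpoint.
egress_gb = 5 * 1000  # 5 TB expressed in GB

for provider, rate in {
    "zero-egress provider": 0.00,   # assumed rate
    "low-egress provider": 0.01,    # assumed $0.01/GB
    "hyperscaler-style provider": 0.09,  # assumed $0.09/GB ballpark
}.items():
    print(f"{provider}: ${monthly_egress_cost(egress_gb, rate):,.2f}/month")
```

At 5 TB/month, the gap between $0.00/GB and $0.09/GB is $450 per month, which is why egress terms deserve a line in any provider comparison.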

For storage, Cherry Servers offers local NVMe SSD plus Elastic Block Storage at $0.071/GB/month.

Check the full bandwidth and data transfer pricing on the Cherry Servers official website.


Guides Where Cherry Servers Is Featured

These guides include Cherry Servers alongside other cloud GPU providers, grouped by hardware, pricing, features, and infrastructure.

Cherry Servers vs Vultr vs Massed Compute - GPU Provider Comparison (April 2026)

Side-by-side comparison of Cherry Servers vs Vultr vs Massed Compute. Quickly scan GPU models, VRAM, pricing, billing granularity, free credits, egress fees, storage, regions, uptime SLAs, developer tooling, and compliance to narrow down your cloud GPU provider shortlist. Data updated April 2026.

Cherry Servers vs Vultr vs Massed Compute - GPU Provider Comparison (April 2026)
Cherry Servers: Bare metal GPU servers with 24 years of hosting experience and full hardware-level control.
Vultr: High-performance cloud GPUs across 32 global regions.
Massed Compute: GPU cloud with direct engineer support.
| Feature | Cherry Servers | Vultr | Massed Compute |
| --- | --- | --- | --- |
| **Overview** | | | |
| Trustpilot Rating | 4.6 | 1.8 | 0 |
| Headquarters | Lithuania | United States | United States |
| Provider Type | N/A | Multi-Cloud | GPU-Focused |
| Best For | AI training, inference, fine-tuning, rendering, research, HPC, generative AI, deep learning | AI training, inference, video rendering, HPC, Stable Diffusion, game development, generative AI, fine-tuning, research | AI training, inference, VFX rendering, generative AI, fine-tuning, HPC, Stable Diffusion, research |
| **GPU Hardware** | | | |
| GPU Models | A100, A40, A16, A10, A2, Tesla P4 | A16, A40, L40S, A100 PCIe, GH200, A100 SXM, H100 SXM, B200, B300, MI300X, MI325X, MI355X | A30, RTX A5000, RTX A6000, L40S, A100 SXM, H100 PCIe, H100 SXM, H100 NVL, RTX PRO 6000, H200 NVL |
| Max VRAM (GB) | 80 | 288 | 141 |
| Max GPUs/Instance | 2 | 16 | 8 |
| Interconnect | PCIe | NVLink | NVLink |
| **Pricing** | | | |
| Starting Price ($/hr) | $0.16 | $0.47 | $0.35 |
| Billing Granularity | Per-hour | Per-hour | Per-minute |
| Spot/Preemptible | No | Yes | No |
| Reserved Discounts | N/A | N/A | N/A |
| Free Credits | None | Up to $300 free credit for 30 days | None |
| Egress Fees | N/A | Standard (varies by plan) | None |
| Storage | NVMe SSD, Elastic Block Storage ($0.071/GB/mo) | 350 GB - 61 TB NVMe (included), Block Storage at $0.10/GB/mo, S3-compatible Object Storage | Local NVMe included with instances |
| **Infrastructure** | | | |
| Regions | Lithuania, Netherlands, Germany, Sweden, US, Singapore (6 locations) | 32 regions across 6 continents (Americas, Europe, Asia, Australia, Africa) | United States (Tier III data centers) |
| Uptime SLA | 99.97% | 100% | Tier III (99.98% design) |
| **Developer Experience** | | | |
| Frameworks | PyTorch, TensorFlow, CUDA (bare metal, full stack control) | PyTorch, TensorFlow, CUDA, cuDNN, ROCm, Hugging Face, NVIDIA NGC | PyTorch, TensorFlow, CUDA, cuDNN, ComfyUI, pre-configured ML templates |
| Docker Support | Yes | Yes | Yes |
| SSH Access | Yes | Yes | Yes |
| Jupyter Notebooks | No | Yes | No |
| API / CLI | Yes | Yes | Yes |
| Setup Time | Minutes | Minutes | Minutes |
| Kubernetes Support | Yes | Yes | No |
| **Business Terms** | | | |
| Min Commitment | None | None | None |
| Compliance | ISO 27001, ISO 20000-1, GDPR, PCI DSS | SOC 2+ (HIPAA), PCI, ISO 27001, ISO 27017, ISO 27018, ISO 20000-1, CSA STAR Level 1 | SOC 2 Type II, HIPAA |
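The starting prices in the comparison above can be turned into a quick run-cost estimate. This is a back-of-the-envelope sketch using only the table's entry-level hourly rates and Cherry Servers' $0.071/GB/month block storage price; actual totals depend on the specific GPU model, instance count, storage, and egress:

```python
# Back-of-the-envelope cost of a 200-hour run at each provider's
# *starting* hourly rate, taken from the comparison table.
# Entry-level rates only -- higher-end GPUs cost substantially more.

STARTING_RATE = {  # $/GPU-hour
    "Cherry Servers": 0.16,
    "Vultr": 0.47,
    "Massed Compute": 0.35,
}

def run_cost(hours: float, rate_per_hour: float, gpus: int = 1) -> float:
    """Compute the GPU-hour cost of a run in USD."""
    return hours * rate_per_hour * gpus

hours = 200
for provider, rate in STARTING_RATE.items():
    print(f"{provider}: ${run_cost(hours, rate):,.2f} for {hours} GPU-hours")

# Add-on: a 500 GB Elastic Block Storage volume on Cherry Servers
# at $0.071/GB/month.
storage_monthly = 500 * 0.071
print(f"Cherry Servers 500 GB block storage: ${storage_monthly:.2f}/month")
```

At these entry rates, 200 GPU-hours comes to roughly $32 on Cherry Servers, $70 on Massed Compute, and $94 on Vultr, before storage and data transfer.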