Does Massed Compute charge for downloading model weights or training outputs?

💡 Answer

No. Massed Compute lists its egress fees as "None," so downloading model weights, training outputs, or any other data off your instances does not incur data transfer charges.

For AI/ML workloads, egress costs can add up quickly when exporting large model checkpoints, serving predictions at scale, or syncing data across regions. Understanding the egress pricing structure at Massed Compute is essential for accurately estimating total cost of ownership.

Storage: Local NVMe included with instances

View detailed egress pricing per region on the official Massed Compute website.
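To make the total-cost-of-ownership point concrete, here is a minimal sketch that estimates a monthly transfer bill for repeatedly exporting a large checkpoint. The $0.09/GB rate is a hypothetical placeholder for a provider that does charge for egress, not a quote from any vendor; Massed Compute's egress is listed as free.

```python
# Hedged sketch: rough egress-cost estimate for exporting model checkpoints.
# The non-zero per-GB rate below is a hypothetical placeholder, not a published price.

def egress_cost(total_gb: float, rate_per_gb: float) -> float:
    """Return the data-transfer cost in USD for moving total_gb out of a cloud."""
    return total_gb * rate_per_gb

# Assumed workload: exporting a 140 GB set of checkpoints once a week for a month.
checkpoint_gb = 140
exports_per_month = 4
monthly_gb = checkpoint_gb * exports_per_month

rates = {
    "free egress (e.g. Massed Compute)": 0.00,
    "hypothetical $0.09/GB provider": 0.09,
}
for provider, rate in rates.items():
    print(f"{provider}: ${egress_cost(monthly_gb, rate):.2f}/month")
```

Even at these modest assumptions, a metered-egress provider adds roughly $50/month for checkpoint exports alone, which is why the egress line item belongs in any cost estimate.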


Guides Where Massed Compute Is Featured

These guides include Massed Compute alongside other cloud GPU providers, grouped by hardware, pricing, features, and infrastructure.

Massed Compute vs RunPod vs Vast.ai - GPU Provider Comparison (March 2026)

Side-by-side comparison of Massed Compute vs RunPod vs Vast.ai. Quickly scan GPU models, VRAM, pricing, billing granularity, reserved discounts, egress fees, storage, regions, developer tooling, and compliance to narrow down your GPU cloud shortlist. Data updated March 2026.

| | Massed Compute | RunPod | Vast.ai |
|---|---|---|---|
| Tagline | GPU cloud with direct engineer support | The cloud built for AI: deploy and scale GPU workloads from serverless inference to instant multi-node clusters on demand. | Instant GPUs. Transparent Pricing. |
| **Overview** | | | |
| Trustpilot Rating | 0 | 3.8 | 4.4 |
| Headquarters | United States | United States | United States |
| Provider Type | GPU-Focused | GPU-Focused | GPU Marketplace |
| Best For | AI training, inference, VFX rendering, generative AI, fine-tuning, HPC, Stable Diffusion, research | AI training, inference, fine-tuning, Stable Diffusion, batch processing, rendering, research, LLM serving, generative AI | AI training, inference, fine-tuning, Stable Diffusion, batch processing, research, LLM serving, generative AI |
| **GPU Hardware** | | | |
| GPU Models | A30, RTX A5000, RTX A6000, L40S, A100 SXM, H100 PCIe, H100 SXM, H100 NVL, RTX PRO 6000, H200 NVL | B300, B200, H200, H100 SXM, H100 PCIe, H100 NVL, MI300X, A100 SXM, A100 PCIe, RTX 5090, RTX PRO 6000, L40S, L40, RTX 6000 Ada, RTX 5000 Ada, RTX A6000, RTX A5000, RTX 4090, RTX 4080 SUPER, RTX 4080, RTX 4070 Ti, RTX 3090 Ti, RTX 3090, RTX 3080 Ti, RTX 3080, RTX 3070, A40, A30, A2, L4 | B200, H200, H100 SXM, H100 NVL, A100 SXM, A100 PCIe, RTX 5090, RTX 5080, RTX 5070 Ti, RTX 6000 Pro, RTX 6000 Ada, RTX 4500 Ada, RTX A6000, RTX A5000, RTX A4000, L40S, L40, A40, A10, RTX 4090, RTX 4080, RTX 4070 Ti, RTX 4070, RTX 4060 Ti, RTX 4060, RTX 3090 Ti, RTX 3090, RTX 3080 Ti, RTX 3080, RTX 3070 Ti, RTX 3070, Tesla V100, Tesla T4, A2, GTX 1080 |
| Max VRAM (GB) | 141 | 288 | 192 |
| Max GPUs/Instance | 8 | 8 | 8 |
| Interconnect | NVLink | NVLink | NVLink, InfiniBand |
| **Pricing** | | | |
| Starting Price ($/hr) | $0.35 | $0.06 | $0.06 |
| Billing Granularity | Per-minute | Per-second | Per-second |
| Spot/Preemptible | No | Yes | Yes |
| Reserved Discounts | N/A | 15-29% (1-month to 1-year plans) | Up to 50% (1-6 month reserved) |
| Free Credits | None | $5-$500 bonus after first $10 spend | Small test credit on signup |
| Egress Fees | None | None (Free) | Varies by host ($/TB) |
| Storage | Local NVMe included with instances | Container/Volume ($0.10/GB/mo), Idle Volume ($0.20/GB/mo), Network Storage ($0.07/GB/mo, 1 TB) | Varies by host ($/GB/hr, charged while instance exists) |
| **Infrastructure** | | | |
| Regions | United States (Tier III data centers) | 31 global regions | 500+ locations, 40+ data centers |
| Uptime SLA | Tier III (99.98% design) | 99.99% | No formal SLA (host reliability scores visible) |
| **Developer Experience** | | | |
| Frameworks | PyTorch, TensorFlow, CUDA, cuDNN, ComfyUI, pre-configured ML templates | PyTorch, TensorFlow, JAX, ONNX, CUDA | PyTorch, TensorFlow, CUDA, vLLM, ComfyUI |
| Docker Support | Yes | Yes | Yes |
| SSH Access | Yes | Yes | Yes |
| Jupyter Notebooks | No | Yes | Yes |
| API / CLI | Yes | Yes | Yes |
| Setup Time | Minutes | Instant | Seconds |
| Kubernetes Support | No | No | No |
| **Business Terms** | | | |
| Min Commitment | None | None | None |
| Compliance | SOC 2 Type II, HIPAA | SOC 2 Type II | SOC 2 Type 2, HIPAA, GDPR, CCPA |
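As a rough illustration of how the pricing rows above combine into a monthly figure, the sketch below adds GPU time, persistent storage, and egress. The workload sizes and the Vast.ai storage/egress rates are assumptions (Vast.ai rates are set per host), and the GPU rates are each provider's lowest listed starting price, so this is not a like-for-like hardware comparison.

```python
# Hedged sketch: back-of-the-envelope monthly cost using figures from the table above.
# Vast.ai storage/egress values are placeholders; its actual rates vary by host.

def monthly_cost(gpu_rate_hr: float, hours: float,
                 storage_gb: float, storage_rate_gb_mo: float,
                 egress_gb: float, egress_rate_gb: float) -> float:
    """Sum GPU time, persistent storage, and egress charges for one month."""
    return gpu_rate_hr * hours + storage_gb * storage_rate_gb_mo + egress_gb * egress_rate_gb

# Assumed workload: 200 GPU-hours, 500 GB of persistent data, 1 TB of downloads.
hours, storage_gb, egress_gb = 200, 500, 1000

providers = {
    # (starting $/hr, storage $/GB/mo, egress $/GB)
    "Massed Compute": (0.35, 0.00, 0.00),  # local NVMe included, egress free
    "RunPod":         (0.06, 0.07, 0.00),  # network storage $0.07/GB/mo, egress free
    "Vast.ai":        (0.06, 0.10, 0.02),  # host-set rates; placeholder values
}

for name, (gpu_rate, storage_rate, egress_rate) in providers.items():
    total = monthly_cost(gpu_rate, hours, storage_gb, storage_rate, egress_gb, egress_rate)
    print(f"{name}: ${total:.2f}/month")
```

The point of the exercise is that hourly GPU price alone rarely tells the whole story; storage and egress terms can shift the ranking depending on how much data a workload moves and keeps.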