AMD Instinct MI350X vs NVIDIA Tesla V100 — GPU Comparison (Apr 2026)
AMD Instinct MI350X (288GB HBM3e, 1,800 TFLOPS FP16, CDNA 4) vs NVIDIA Tesla V100 (16GB HBM2, 125 TFLOPS FP16, Volta). Cloud pricing: NVIDIA Tesla V100 from $0.13/hr. Compare specs, VRAM, performance, and pricing across 1 cloud provider to find the best GPU for your AI workload.
| | AMD Instinct MI350X (288GB HBM3e · CDNA 4) | NVIDIA Tesla V100 (16GB HBM2 · Volta) |
|---|---|---|
| Specifications | | |
| Manufacturer | AMD | NVIDIA |
| Architecture | CDNA 4 | Volta |
| VRAM | 288 GB HBM3e | 16 GB HBM2 |
| Bandwidth | 8,000 GB/s | 900 GB/s |
| FP16 (Tensor) | 1,800.0 TFLOPS | 125.0 TFLOPS |
| FP32 | 72.0 TFLOPS | 15.7 TFLOPS |
| TDP | 1,000 W | 300 W |
| Release year | 2025 | 2017 |
| Segment | Data center | Data center |
| Best suited for | Next-gen AI training & inference at scale | Legacy training, inference, and HPC |
| Cloud pricing | | |
| Cheapest on-demand | — | $0.13/hr |
| Cheapest spot | — | — |
| Providers | 1 | 1 |
| Provider pricing (on-demand) | | |
| | Not available | $0.13/hr |
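For a quick sense of how the two cards stack up, the headline figures from the table can be compared directly. The sketch below is a minimal Python example using only the table's numbers; the spec names and helper functions are illustrative, and real-world throughput depends on workload, precision, and software stack.

```python
# Headline specs from the comparison table above (table figures only).
SPECS = {
    "AMD Instinct MI350X": {"fp16_tflops": 1800.0, "bw_gbs": 8000, "tdp_w": 1000, "vram_gb": 288},
    "NVIDIA Tesla V100":   {"fp16_tflops": 125.0,  "bw_gbs": 900,  "tdp_w": 300,  "vram_gb": 16},
}

def ratio(metric: str) -> float:
    """MI350X-to-V100 ratio for a given spec key."""
    return SPECS["AMD Instinct MI350X"][metric] / SPECS["NVIDIA Tesla V100"][metric]

def fp16_tflops_per_watt(name: str) -> float:
    """Peak FP16 throughput per watt of TDP, a rough efficiency proxy."""
    s = SPECS[name]
    return s["fp16_tflops"] / s["tdp_w"]

if __name__ == "__main__":
    print(f"FP16 compute ratio:   {ratio('fp16_tflops'):.1f}x")  # 14.4x
    print(f"Memory bandwidth ratio: {ratio('bw_gbs'):.1f}x")     # 8.9x
    print(f"MI350X FP16 TFLOPS/W: {fp16_tflops_per_watt('AMD Instinct MI350X'):.2f}")
    print(f"V100 FP16 TFLOPS/W:   {fp16_tflops_per_watt('NVIDIA Tesla V100'):.2f}")
```

On these table numbers the MI350X offers roughly 14x the peak FP16 throughput at about 3.3x the power draw, though only the V100 has a listed on-demand price here, so a price-performance comparison is not possible from this table alone.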