GPUs & Pricing
GPU Types
Compare GPU models available across all Aquanode providers
Different workloads require different GPUs. The table below compares the available GPU models, their memory, typical use cases, and the providers that offer them.
Availability varies by provider
Not all GPU models are available at all times. Actual availability depends on provider capacity and region. Check the Aquanode Marketplace for live inventory.
GPU Catalog
| GPU Model | Memory | Ideal Use Case | Providers |
|---|---|---|---|
| NVIDIA H100 | 80 GB | Large LLM training & inference | Voltage Park, DataCrunch, Akash, Hyperstack, Massed Compute, Cudo Compute |
| NVIDIA H200 | 141 GB | Next-gen LLM training | DataCrunch |
| NVIDIA B200 | 192 GB | Frontier LLM workloads | DataCrunch |
| NVIDIA A100 | 40/80 GB | Training & inference | Akash, DataCrunch, Vast.ai, Vultr, Massed Compute, Cudo Compute |
| AMD MI300X | 192 GB | Large model inference | Hot Aisle |
| AMD MI355X | 288 GB | Next-gen AMD workloads | Hot Aisle |
| RTX 6000 Ada | 48 GB | Inference, CV, smaller models | DataCrunch, Vast.ai, Massed Compute |
| RTX 4090 | 24 GB | Fine-tuning, inference | Voltage Park, DataCrunch, Akash, Vast.ai, Hyperstack |
| RTX 3090 | 24 GB | Fine-tuning, inference | Akash, DataCrunch, Vast.ai |
| L40 | 48 GB | Inference & rendering | Hyperstack, Vast.ai, Massed Compute, Cudo Compute |
| A6000 | 48 GB | Rendering, inference | Hyperstack, Vast.ai, Massed Compute |
| A40 | 48 GB | Inference & visualization | Vultr |
| A16 | 64 GB | Virtualized low-VRAM workloads | Vultr |
| V100 | 16/32 GB | Legacy training & inference | Akash, Vast.ai |
| PRO6000 | 96 GB | Professional workloads | Massed Compute |
For real-time availability, see the Aquanode Marketplace.
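When narrowing down the catalog, a useful first filter is VRAM, since model size usually dictates the minimum memory you need. The sketch below shows one way to do that filtering; the catalog structure and field names are illustrative only, not an Aquanode API.

```python
# Hypothetical, minimal representation of a few rows from the catalog above.
# Field names ("model", "memory_gb", "providers") are illustrative.
CATALOG = [
    {"model": "NVIDIA H100", "memory_gb": 80, "providers": ["Voltage Park", "DataCrunch"]},
    {"model": "NVIDIA A100", "memory_gb": 80, "providers": ["Akash", "DataCrunch", "Vast.ai"]},
    {"model": "RTX 4090",    "memory_gb": 24, "providers": ["Voltage Park", "Vast.ai"]},
]

def gpus_with_at_least(min_memory_gb: int):
    """Return catalog entries whose VRAM meets the requirement."""
    return [g for g in CATALOG if g["memory_gb"] >= min_memory_gb]

for gpu in gpus_with_at_least(48):
    print(gpu["model"])
```

From there you can intersect the remaining models with the providers that actually carry them, then check live inventory on the Marketplace.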
Supported GPU Providers
Aquanode integrates multiple providers to ensure reliable, cost-efficient, and flexible GPU compute. Provider selection is handled automatically based on availability, pricing, and performance requirements.
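Conceptually, the automatic selection described above can be thought of as choosing among provider offers by availability and price. The sketch below is a rough mental model only: Aquanode's actual scheduler is internal, and the offers, prices, and selection rule here are hypothetical.

```python
# Illustrative sketch of availability- and price-based selection.
# All data below is made up; Aquanode's real logic and rates differ.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    provider: str
    gpu: str
    price_per_hour: float  # USD, hypothetical
    available: bool

def pick_offer(offers: List[Offer], gpu_model: str) -> Optional[Offer]:
    """Choose the cheapest currently-available offer for the requested GPU."""
    candidates = [o for o in offers if o.gpu == gpu_model and o.available]
    return min(candidates, key=lambda o: o.price_per_hour, default=None)

offers = [
    Offer("DataCrunch", "NVIDIA H100", 2.4, True),
    Offer("Hyperstack", "NVIDIA H100", 2.1, False),  # cheaper, but no capacity
    Offer("Akash",      "NVIDIA H100", 2.6, True),
]
best = pick_offer(offers, "NVIDIA H100")
```

Note that the cheapest listed offer loses here because it has no capacity; this is why live availability, not just the posted rate, drives placement.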
Pricing
GPU pricing models and on-demand rates across all Aquanode providers