
Supported GPU Providers

Aquanode integrates multiple providers to ensure reliable, cost-efficient, and flexible GPU compute. Provider selection is handled automatically based on availability, pricing, and performance requirements.
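As a rough illustration of how such automatic selection can work, the sketch below scores offers for a requested GPU model by price, availability, and relative performance. The data, weights, and function names here are illustrative assumptions, not Aquanode's actual scheduler or API, and the prices are placeholders rather than real provider rates.

from dataclasses import dataclass

@dataclass
class ProviderOffer:
    provider: str
    gpu: str
    hourly_price_usd: float   # placeholder price per GPU-hour, not a real rate
    available_gpus: int       # GPUs currently free with this provider
    perf_score: float         # relative throughput, higher is better

def pick_offer(offers, gpu_model, min_available=1):
    """Pick an offer for a GPU model by weighing price, availability, and performance.

    The scoring below (price divided by relative performance) is a simple
    illustrative heuristic; a real scheduler would also account for region,
    quotas, and other constraints.
    """
    candidates = [
        o for o in offers
        if o.gpu == gpu_model and o.available_gpus >= min_available
    ]
    if not candidates:
        raise ValueError(f"No provider currently offers {gpu_model}")
    return min(candidates, key=lambda o: o.hourly_price_usd / o.perf_score)

# Hypothetical offers; provider names match the list below, numbers are made up.
offers = [
    ProviderOffer("Voltage Park", "H100", 2.80, available_gpus=16, perf_score=1.00),
    ProviderOffer("DataCrunch", "H100", 2.40, available_gpus=4, perf_score=0.98),
    ProviderOffer("Akash Network", "A100", 1.10, available_gpus=32, perf_score=0.55),
]

best = pick_offer(offers, "H100")
print(f"Selected {best.provider} at ${best.hourly_price_usd}/hr")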

Providers

Voltage Park

  • GPUs: H100, H200, A100, RTX 4090
  • Regions: US East, US West, EU Central
  • Focus: Enterprise-grade infrastructure with high availability and performance

DataCrunch

  • GPUs: H100, A100, RTX 4090, RTX 3090
  • Regions: Netherlands, Finland, Canada
  • Focus: Cost-effective GPU compute with flexible pricing

Akash Network

  • GPUs: A100, RTX 4090, RTX 3090, V100
  • Regions: Global, decentralized network
  • Focus: Open, peer-to-peer decentralized cloud computing

Available GPUs

GPU Model   Memory              Best For
H100        80GB HBM3           Large language models, distributed training
H200        141GB HBM3e         Memory-intensive AI/ML workloads
A100        40GB / 80GB HBM2e   ML training and inference
RTX 4090    24GB GDDR6X         Development, inference, small models
RTX 3090    24GB GDDR6X         Cost-effective prototyping and testing
V100        16GB / 32GB HBM2    Legacy ML workloads and compatibility
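The memory column is usually the deciding factor when sizing a workload. The sketch below picks the smallest listed GPU whose memory covers a model's estimated footprint; the helper is a hypothetical illustration rather than part of Aquanode's API, though the memory figures come from the table above.

# Memory per GPU model, taken from the table above (GB; the larger
# variant is used where a model ships in two sizes).
GPU_MEMORY_GB = {
    "H100": 80,
    "H200": 141,
    "A100": 80,       # also available with 40GB
    "RTX 4090": 24,
    "RTX 3090": 24,
    "V100": 32,       # also available with 16GB
}

def smallest_fitting_gpu(required_gb: float) -> str:
    """Return the smallest listed GPU whose memory covers the requirement."""
    fitting = [(mem, gpu) for gpu, mem in GPU_MEMORY_GB.items() if mem >= required_gb]
    if not fitting:
        raise ValueError(f"No single listed GPU has {required_gb}GB of memory")
    return min(fitting)[1]

# Example: a 7B-parameter model in fp16 needs roughly 14GB for weights alone,
# plus headroom for activations and KV cache -- call it ~20GB.
print(smallest_fitting_gpu(20))   # -> RTX 3090 (a 24GB card)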

Note

  • Providers may update their hardware offerings over time.
  • GPU allocation is optimized by Aquanode to balance cost, performance, and availability.