ComfyUI
Deploy ComfyUI, a powerful node-based interface for Stable Diffusion, on Aquanode's optimized infrastructure with GPU acceleration.
Access ComfyUI Deployment
Navigate to Serverless ComfyUI.
Review Configuration
You'll see a page displaying the basic configuration for ComfyUI. These settings are pre-set and cannot be edited, but they show which ports are available:
- VS Code UI Port: Access to the integrated development environment
- ComfyUI App Port: Port on which the main ComfyUI interface runs
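Once the deployment is live, the ComfyUI app port serves both the web UI and ComfyUI's small HTTP API. The sketch below shows, under stated assumptions, how you might build requests against that port; the hostname is a placeholder, not a real Aquanode endpoint, and 8188 is simply ComfyUI's default port.

```python
import json

# Placeholders: substitute the host and app port from your deployment.
COMFYUI_HOST = "your-deployment.example.com"
COMFYUI_PORT = 8188  # ComfyUI's default listening port

def comfyui_url(path: str) -> str:
    """Build a URL against the deployed ComfyUI app port."""
    return f"http://{COMFYUI_HOST}:{COMFYUI_PORT}/{path.lstrip('/')}"

def build_prompt_request(workflow: dict) -> str:
    """ComfyUI queues work via POST /prompt; the exported workflow
    graph is wrapped under the "prompt" key of the JSON body."""
    return json.dumps({"prompt": workflow})

print(comfyui_url("system_stats"))
# Send with e.g. urllib.request.urlopen(comfyui_url("system_stats"))
# once the instance is reachable.
```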
Continue to Deploy
Click "Continue to Deploy" to proceed with the deployment process.
GPU Selection
You'll be redirected to a GPU selection page where you can choose from a wide range of GPU providers and configurations. Each provider offers different GPU types with specific configurations optimized for ComfyUI workflows.
Browse the GPU gallery and select the GPU that best matches your performance and budget requirements.
Configuration Sheet
A configuration sheet opens as a side panel containing all necessary settings to customize your ComfyUI deployment.
Configure Your ComfyUI
A. Port Configuration
Configure network access for your ComfyUI instance:
- Primary Port: Main port for ComfyUI web interface
- Additional Ports: Secondary ports for VS Code and other services
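After configuring ports, you can verify that a deployed service is actually reachable with a plain TCP check. This is a generic sketch, not an Aquanode tool; it demonstrates the check against a local listener so the result is deterministic.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a local listener so the check is self-contained:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_is_open("127.0.0.1", port))  # True while the listener accepts
listener.close()
```

Point the same function at your deployment's hostname and primary port to confirm the ComfyUI interface is up.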
B. Machine Configuration
- GPU Configuration
- CPU & Memory
- Storage Configuration
Aquanode provides optimized default configurations for ComfyUI based on performance testing, but certain settings can be customized depending on the selected provider.
Deploy ComfyUI
Review your configuration summary including:
- Resource Allocation: Total GPU, CPU, memory, and storage
- Estimated Costs: Monthly cost projection
Click the "Deploy" button to initiate deployment.
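For a rough sense of how a monthly projection relates to an hourly GPU rate, the arithmetic is a simple multiplication. The rate below is a made-up illustrative figure, not an Aquanode price; the dashboard's estimate is authoritative.

```python
# ~730 billable hours per month is a common convention (365.25 * 24 / 12).
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate_usd: float, gpu_count: int = 1) -> float:
    """Project a monthly cost from a per-GPU hourly rate."""
    return round(hourly_rate_usd * gpu_count * HOURS_PER_MONTH, 2)

print(monthly_cost(0.50))     # 365.0 -> one GPU at a hypothetical $0.50/hr
print(monthly_cost(0.50, 2))  # 730.0 -> two GPUs at the same rate
```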
Hosting a Hugging Face Model
Aquanode simplifies the deployment and serving of Hugging Face models with the vLLM inference engine, enabling you to host large language models (LLMs) and text-to-text models with optimized GPU efficiency and lightning-fast API responses.
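vLLM exposes an OpenAI-compatible HTTP API, so a hosted model can be queried at `/v1/chat/completions`. The sketch below builds such a request; the base URL and model name are placeholders for whatever your deployment exposes, not values provided by Aquanode.

```python
import json

# Placeholders: substitute your deployment's endpoint and hosted model id.
BASE_URL = "http://your-deployment.example.com:8000"
MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # example Hugging Face model id

def build_chat_request(prompt: str, max_tokens: int = 128):
    """Return (url, body) for vLLM's OpenAI-compatible chat endpoint."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }).encode()
    return f"{BASE_URL}/v1/chat/completions", body

url, body = build_chat_request("Hello!")
print(url)
# Send with urllib.request.Request(url, data=body,
#   headers={"Content-Type": "application/json"}) once the server is live.
```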
OSS Models
Deploy open-source models from the list of available models, with GPU acceleration for inference.