| GPU Model | Configuration | Starting Price (per hour) | GPU Memory | Key Features |
|---|---|---|---|---|
| H200 NVL PCIe | 4x or 8x GPUs | $2.70/hour | - | Single-node with NVLink |
| H100 NVL PCIe | 2x, 4x, or 8x GPUs | $1.88/hour | - | Single-node with NVLink |
| HGX H200 | 4x or 8x GPUs | $2.63/hour | - | Single and multi-node cluster |
| HGX B200 | TBA | $4.13/hour | - | Blackwell GPU architecture |
| GB200 | 36 Grace CPUs + 72 Blackwell GPUs | $8.00/hour | - | Liquid-cooled rack |
| L40S | 2x, 4x, or 8x GPUs | $1.80/hour | 48GB | PCIe configuration |
| RTX PRO 6000 | 1x to 8x GPUs | $1.24/hour | - | Blackwell architecture |
| RTX A6000 | 1x or 2x GPUs | $0.60/hour | 48GB | PCIe configuration |
| RTX A5000 | 1x or 2x GPUs | $0.45/hour | 24GB | PCIe configuration |
Pricing is shown as starting rates per hour. Actual pricing may vary based on configuration, region, and availability.
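As a quick illustration of how the starting rates translate into a budget, the snippet below estimates the cost of a fixed-length run. It assumes the listed rates are billed per GPU per hour, which you should confirm for your specific configuration and region.

```python
# Rough cost estimate, assuming the table's rates are per GPU per hour.
# Rates are starting prices and vary by configuration, region, and availability.
hourly_rate_per_gpu = 1.88   # H100 NVL starting rate from the table above
num_gpus = 8
hours = 24

estimated_cost = hourly_rate_per_gpu * num_gpus * hours
print(f"Estimated cost: ${estimated_cost:,.2f}")  # -> Estimated cost: $360.96
```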
## Getting Started
To deploy GPU compute resources:

- Choose your GPU configuration based on your workload requirements
- Create a machine using the Airon CLI or API (see the example after this list)
- Configure your environment with the necessary frameworks and libraries
- Deploy your workload and monitor performance
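For the machine-creation step, the sketch below shows what a request to an HTTP API could look like using Python's requests library. It is a hypothetical example, not Airon's actual API: the base URL, endpoint path, payload fields (`gpu_model`, `gpu_count`, `region`), and the `AIRON_API_URL` / `AIRON_API_KEY` environment variables are placeholders. Consult the Airon API reference for the real endpoints and payloads.

```python
import os

import requests

# Hypothetical sketch of creating a GPU machine through an HTTP API.
# The base URL, endpoint path, and payload fields are placeholders;
# substitute the values documented in the Airon API reference.
API_URL = os.environ.get("AIRON_API_URL", "https://api.example.com/v1")
API_KEY = os.environ["AIRON_API_KEY"]  # keep credentials out of source code

payload = {
    "gpu_model": "L40S",   # one of the models from the table above
    "gpu_count": 4,        # must match a supported configuration (2x, 4x, 8x)
    "region": "us-east",   # placeholder region name
}

response = requests.post(
    f"{API_URL}/machines",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
machine = response.json()
print("Created machine:", machine.get("id"))
```

Once the machine is reachable, confirm that the GPUs are visible to your framework before deploying the workload, for example with `nvidia-smi` or, in PyTorch, `torch.cuda.is_available()` and `torch.cuda.device_count()`.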