- Deploy and run secure bare-metal GPUs, general compute, and storage as dedicated machines or entire AI factories
- Cluster auto-setup running production Kubernetes, K3s, Nomad, and others
- Run serverless inference serving open-source models with a dedicated OpenAI-compatible endpoint
- Manage through a dashboard, developer-friendly API, or CLI
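Because the serverless inference endpoint is OpenAI-compatible, any client that speaks the standard chat completions wire format should work against it. The sketch below builds such a request body; the base URL and model name are placeholders, not real values from this platform.

```python
import json

# Hypothetical values -- substitute your actual endpoint and deployed model.
BASE_URL = "https://your-endpoint.example.com/v1"


def build_chat_request(model: str, prompt: str) -> str:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })


body = build_chat_request("my-open-model", "Hello!")
# Send `body` with any HTTP client (or point the official OpenAI SDK's
# base_url at the endpoint) with your API key in the Authorization header.
print(body)
```

Since the wire format matches OpenAI's, existing SDKs can typically be reused by overriding their base URL rather than writing raw HTTP calls.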