Beta Test On-Demand Kubernetes
Apply Now!
We aim to democratise AI by lowering the barriers to containerised solutions. That's why we're building On-Demand Kubernetes, a robust, AI-optimised Kubernetes service designed to simplify and accelerate your AI development.
Be Among The First 10 Users To Experience The Power Of Our On-Demand Kubernetes Today!
Why Apply to be a Beta Tester?
Becoming a Beta Tester will give you:
Early Access:
Be among the first to experience Hyperstack's On-Demand Kubernetes and have your say on product development. Tell us what suits you, so we can grow together!
Use for Free:
Enjoy complimentary access to the Beta version of Hyperstack's On-Demand Kubernetes! You only pay for the worker-node VMs (compute and storage).
Fully Optimised for AI:
Our AI-optimised operating system, network, and storage solutions ensure your resources are used efficiently, delivering superior performance. Hyperstack's On-Demand Kubernetes is fully optimised for our internal infrastructure with additional bespoke drivers, and is designed for seamless integration to deliver top performance for your AI applications.
Cost-Effective Scalability:
Benefit from on-demand scaling capabilities, allowing you to adjust resources in minutes based on your needs, ensuring cost-effectiveness by only paying for what you need.
Intuitive API Integration:
Streamline your workflows with smooth integration of intuitive APIs for automated deployment and management.
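Once a cluster is running, workloads are deployed with standard Kubernetes tooling. As a minimal sketch (assuming the NVIDIA device plugin is installed on worker nodes, which exposes the `nvidia.com/gpu` resource), a GPU-backed pod manifest might look like:

```yaml
# Minimal illustrative GPU pod spec (assumption: NVIDIA device plugin
# is present, so nvidia.com/gpu can be requested as a resource limit).
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]   # prints the GPU visible inside the container
      resources:
        limits:
          nvidia.com/gpu: 1     # request one GPU from the worker node
```

Applying this with `kubectl apply -f gpu-smoke-test.yaml` is a quick way to confirm the cluster schedules GPU workloads correctly.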
Disclaimer
For users whose needs don't call for a full managed Kubernetes solution, we are thrilled to announce the upcoming technical preview of our CaaS (Container as a Service) offering. This alternative will let you effortlessly launch a single container on a virtual machine, providing the simplest and quickest way to deploy your applications.
GPU Pricing
Choose which GPU model to deploy Kubernetes on from our range of cutting-edge NVIDIA GPUs.
GPU Cloud Pricing
We have Data Centres in Europe and North America. Our billing cycles are accurate to the minute, so you only pay for what you use.
| GPU Model | VRAM (GB) | Max pCPUs per GPU | Max RAM (GB) per GPU | Pricing Per Hour | Reservation Pricing |
|---|---|---|---|---|---|
| NVIDIA H100 SXM 80GB | 80 | 24 | 240 | $3.00 | Starts from $2.25/hour |
| NVIDIA H100 PCIe NVLink 80GB | 80 | 31 | 180 | $1.95 | Starts from $1.37/hour |
| NVIDIA H100 PCIe 80GB | 80 | 28 | 180 | $1.90 | Starts from $1.33/hour |
| NVIDIA A100 80GB PCIe NVLink | 80 | 31 | 240 | $1.40 | Starts from $0.98/hour |
| NVIDIA A100 PCIe 80GB | 80 | 28 | 120 | $1.35 | Starts from $0.95/hour |
| NVIDIA L40 | 48 | 28 | 58 | $1.00 | Starts from $0.70/hour |
| NVIDIA RTX A6000/A40 | 48 | 28 | 58 | $0.50 | Starts from $0.35/hour |
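Because billing is accurate to the minute, the effective cost of a short run can be estimated by pro-rating the hourly rate. The sketch below (an illustration only, assuming simple linear pro-rating; actual invoices may differ) shows how the on-demand prices in the table translate to per-minute costs:

```python
# Illustrative cost estimate for per-minute GPU billing.
# Assumption: the hourly rate is pro-rated linearly to the minute;
# rates are the on-demand prices from the table above (USD/hour).
HOURLY_RATES = {
    "NVIDIA H100 SXM 80GB": 3.00,
    "NVIDIA A100 PCIe 80GB": 1.35,
    "NVIDIA RTX A6000/A40": 0.50,
}

def estimate_cost(gpu_model: str, minutes: int, gpu_count: int = 1) -> float:
    """Pro-rate the hourly rate to the minute for the given GPU count."""
    rate = HOURLY_RATES[gpu_model]
    return round(rate / 60 * minutes * gpu_count, 2)

# 90 minutes on one A100 PCIe 80GB
print(estimate_cost("NVIDIA A100 PCIe 80GB", minutes=90))
```

For example, a full hour on one H100 SXM costs exactly the listed $3.00, while 30 minutes on an RTX A6000/A40 comes to $0.25.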