
NVIDIA H100 SXMs On-Demand at $3.00/hour - Reserve from just $2.10/hour. Reserve here

Deploy 8 to 16,384 NVIDIA H100 SXM GPUs on the AI Supercloud. Learn More

NVIDIA H100 SXM

A Supercloud Specialised for AI

Introducing the most advanced AI cluster of its kind: Hyperstack’s NVIDIA H100 SXM is built on custom DGX reference architecture. Deploy from 8 to 16,384 cards in a single cluster - only available through Hyperstack's Supercloud reservation.


The Largest Single Cluster of NVIDIA H100s

We ensure a surplus of power to run even the most demanding workloads. With up to 16,384 NVIDIA H100 80GB cards operating in a single cluster, there are no break points, and the cluster fully supports multi-tenant use.


Built with NVIDIA
DGX Protocols

Built on NVIDIA DGX reference architecture, the NVIDIA H100 SXM model integrates seamlessly into the DGX ecosystem, providing a comprehensive solution for enterprise-level AI development and applications.


Unmatched Network Connectivity

While most platforms boast “fast” connectivity, typically in the range of 200 Gb/s to 800 Gb/s, Hyperstack's Supercloud operates at a staggering 3.2 Tb/s (3,200 Gb/s), a significant performance advantage over traditional platforms.
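To put those link speeds in context, here is a back-of-the-envelope sketch (using the headline figures quoted above; real-world throughput will be lower due to protocol overheads and congestion):

```python
def transfer_time_seconds(payload_gb: float, link_gbps: float) -> float:
    """Idealised time to move a payload over a network link.

    payload_gb is in gigabytes, link_gbps in gigabits per second;
    protocol overhead and congestion are ignored.
    """
    return payload_gb * 8 / link_gbps  # gigabytes -> gigabits, then divide by Gb/s

# Moving an 80 GB checkpoint between nodes at typical vs. Supercloud speeds:
for label, gbps in [("200 Gb/s", 200), ("800 Gb/s", 800), ("3,200 Gb/s", 3200)]:
    print(f"{label}: {transfer_time_seconds(80, gbps):.2f} s")
```

At 3,200 Gb/s the same transfer that takes seconds on a typical link completes in a fraction of a second.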

NVIDIA H100 SXM

Pricing starts from $2.10/hour


Unrivalled Performance in…


AI Training & Inference

30x faster inference speed and 9x faster training speed.*


LLM Performance

30x faster processing*, enhancing language model performance.


Single Cluster Scalability

The AI Supercloud environment is the largest single cluster of H100s available outside of Public Cloud.


Connectivity

Specifically designed so that every node can fully utilise its CPU, GPU and network capabilities without limits.

Customised & Scalable Service Delivery


Bespoke Solutions for Diverse Needs

Every business is unique, and at Hyperstack, we tailor our service delivery to match your specific requirements. We personally onboard you to the Supercloud, understanding your unique challenges and objectives, ensuring a solution that aligns perfectly with your business goals.


Scalability at Your Fingertips

Flexibility and scalability are the cornerstones of our service delivery model. Built in clusters of up to 16,384 cards, no service outside of public cloud delivers the same scale and performance for AI workloads.

SuperCloud Storage

The WEKA® Data Platform offers a comprehensive and integrated data management solution tailored for Supercloud environments. This platform is designed to support dynamic data pipelines, providing high-performance data storage and processing capabilities that cater to every phase of the data lifecycle, including data ingestion, cleansing, modelling, training, validation, and inference.


Scalable Performance

The WEKA Data Platform is designed to deliver high I/O, low latency, and support for small files and mixed workloads, which is crucial for the performance-intensive nature of AI applications.

Organisations can independently and linearly scale their compute and storage in the cloud, efficiently handling tens of millions or even billions of files of all data types and sizes. Certified for NVIDIA DGX SuperPOD.


Simplified Data Management

The platform is multi-tenant, multi-workload, multi-performant, and multi-location, with a common management interface that simplifies data management of complex AI pipelines across various environments.


Secure & Compliant

WEKA provides encryption for data in-flight and at-rest, ensuring compliance and governance for sensitive AI data.


Energy Efficient

The platform lowers energy consumption and reduces carbon emissions by cutting data pipeline idle time, extending the usable life of hardware, and facilitating workload migration to the cloud. The WEKA Data Platform can be utilised as part of Supercloud environments or as an additional option for the standard cloud offering through Hyperstack, providing shared storage NAS and Object storage with support for snapshots and cloning.

Benefits of NVIDIA H100 SXM

SXM Form Factor

High-density GPU configurations, efficient cooling, and energy optimisation with the superior SXM form factor

DGX reference architecture

Designed with DGX reference architecture to meet the rigorous demands of enterprise-level AI and Machine Learning applications.

Scalable Design

Modular architecture for seamless scalability to meet evolving computational needs, built in single clusters of up to 16,384 H100 cards.

TDP of 700 W

Designed to operate at a higher TDP compared to the PCIe version, the SXM H100 is ideal for the most intensive AI and HPC applications that demand peak computational power.

NVLink & NVSwitch

The HGX SXM5 H100 utilises NVLink and NVSwitch technologies, providing significantly higher interconnect bandwidth compared to our PCIe version. 

GPUDirect Technology

Enhanced data movement and improved performance: read and write to/from GPU memory, eliminating unnecessary memory copies, decreasing CPU overheads and reducing latency.

NVIDIA H100 SXM

Up to 8 Weeks Delivery Time For Up to 16,384 NVIDIA H100 SXM Card Cluster!


Technical Specifications

Specification: NVIDIA H100 SXM
Form factor: 8x NVIDIA H100 SXM
FP64: 34 teraFLOPS
FP64 Tensor Core: 67 teraFLOPS
FP32: 67 teraFLOPS
TF32 Tensor Core: 989 teraFLOPS
BFLOAT16 Tensor Core: 1,979 teraFLOPS
FP16 Tensor Core: 1,979 teraFLOPS
FP8 Tensor Core: 3,958 teraFLOPS
INT8 Tensor Core: 3,958 TOPS
GPU Memory: 80GB
GPU Memory Bandwidth: 3.35TB/s
Connectivity: 3.2 Tb/s
Decoders: 7 NVDEC / 7 JPEG
Multi-instance GPUs: Up to 7 MIGs @ 10GB each
Max thermal design power (TDP): Up to 700W (configurable)
Interconnect: NVLink: 900GB/s; PCIe Gen5: 128GB/s
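A couple of quick figures can be derived from the specifications above (a rough sketch using the headline numbers; sustained real-world rates will be lower):

```python
# Headline figures from the specification table above.
specs = {
    "gpu_memory_gb": 80,         # GPU Memory
    "mem_bandwidth_tb_s": 3.35,  # GPU Memory Bandwidth
    "fp16_tflops": 1979,         # FP16 Tensor Core
    "fp8_tflops": 3958,          # FP8 Tensor Core
}

# Minimum time to stream the full 80GB of GPU memory once at peak bandwidth.
full_memory_sweep_ms = specs["gpu_memory_gb"] / (specs["mem_bandwidth_tb_s"] * 1000) * 1000

# FP8 doubles Tensor Core throughput relative to FP16.
fp8_speedup = specs["fp8_tflops"] / specs["fp16_tflops"]

print(f"Full-memory sweep: {full_memory_sweep_ms:.1f} ms")  # ~23.9 ms
print(f"FP8 vs FP16 throughput: {fp8_speedup:.0f}x")        # 2x
```

The memory-sweep time is a useful lower bound for any kernel that must touch all of HBM, such as a full-model forward pass at batch size 1.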

Only available through Supercloud reservation


Frequently Asked Questions

We have a dedicated technical team for onboarding NVIDIA H100 SXM users, ready to help you set up a Supercloud environment in any configuration you require.

What is the difference between NVIDIA's H100 SXM and PCIe models?

Both NVIDIA's H100 SXM and PCIe models are powerful GPUs, but they differ in connectivity and performance. PCIe is flexible and comparatively lower in cost, while SXM offers higher memory bandwidth, faster interconnect and a higher power limit for extreme performance.

What is NVIDIA H100 SXM?

NVIDIA's H100 SXM is a high-performance GPU designed for data centres, optimised for demanding workloads such as AI, scientific simulations and big data analytics.

How much faster is NVIDIA H100 than NVIDIA A100?

The NVIDIA H100 offers 30x faster inference speed and 9x faster training speed than the A100.

What are the NVIDIA H100 SXM specs?

The key features of NVIDIA H100 SXM include:

  • SXM Form Factor: It is designed with high-density GPU configurations, efficient cooling, and energy optimisation, making it suitable for demanding applications.
  • DGX Reference Architecture: Integrated with the DGX reference architecture, it meets the rigorous demands of enterprise-level AI and machine learning applications.
  • Scalable Design: With a modular architecture, it allows for seamless scalability to meet evolving computational needs. It can be built into single clusters of more than 16,384 NVIDIA H100 cards.
  • GPUDirect Technology: This technology enhances data movement and improves performance by enabling direct reading and writing to/from GPU memory, reducing CPU overheads and latency.
  • NVLink & NVSwitch: Utilising NVLink and NVSwitch technologies, provides significantly higher interconnect bandwidth compared to PCIe versions, enhancing overall performance.
  • TDP of 700W: The NVIDIA H100 SXM is designed to operate at a higher Thermal Design Power (TDP) compared to PCIe versions, making it ideal for the most intensive AI and high-performance computing (HPC) applications that demand peak computational power.

What is the TDP of NVIDIA H100 SXM?

The NVIDIA H100 SXM power is up to 700W.

What is the NVIDIA H100 SXM price?

The NVIDIA H100 SXM price for rent starts at $3.75 per hour on Hyperstack. For long-term use, you can reserve the NVIDIA H100 SXM starting from $2.10/hour.
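As a rough illustration of the rates quoted on this page ($3.00/hour on-demand in the banner above, $2.10/hour reserved), here is a minimal cost sketch; actual billing terms may differ:

```python
def cluster_cost(gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    """Total cost of running a cluster: GPUs x hours x hourly rate."""
    return gpus * hours * rate_per_gpu_hour

# One month (~730 hours) on a minimal 8-GPU node:
on_demand = cluster_cost(8, 730, 3.00)  # on-demand rate from the banner
reserved = cluster_cost(8, 730, 2.10)   # reservation rate
print(f"On-demand: ${on_demand:,.2f}, reserved: ${reserved:,.2f}, "
      f"saving: ${on_demand - reserved:,.2f}")
```

At these rates, reserving rather than running on demand saves roughly 30% per GPU-hour, and the difference compounds quickly at cluster scale.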