


Build and Innovate with NVIDIA H200 SXM

Designed to accelerate modern AI workloads and boost HPC performance with larger memory, the NVIDIA H200 SXM is now available on-demand on Hyperstack. 


Unrivalled Performance in:


Generative AI

Up to 2x the speed of the NVIDIA H100 for model training and inference. 


LLM Inference

Delivers 2x the performance of the NVIDIA H100 for LLMs like Llama 2 70B.


High-Performance Computing (HPC) 

Offers 110x faster MILC performance than dual x86 CPUs, perfect for memory-heavy tasks.


AI & Machine Learning (ML)

3,958 TFLOPS of FP8 compute supercharge AI model training for lightning-fast results. 
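As a rough illustration (not a Hyperstack benchmark), the headline 3,958 TFLOPS FP8 figure can be turned into a back-of-envelope inference ceiling. The ~2 FLOPs-per-parameter-per-token rule of thumb and the ~30% utilisation factor below are assumptions, not figures from this page:

```python
# Back-of-envelope: theoretical token throughput for a 70B-parameter LLM
# on one H200 SXM, from the headline 3,958 TFLOPS FP8 figure.
# Assumption (not from this page): ~2 FLOPs per parameter per generated
# token, a common rough estimate for transformer inference.

PEAK_FP8_FLOPS = 3_958e12       # 3,958 TFLOPS, from the spec above
PARAMS = 70e9                   # Llama 2 70B parameter count
FLOPS_PER_TOKEN = 2 * PARAMS    # ~140 GFLOPs per token (rough estimate)

ceiling = PEAK_FP8_FLOPS / FLOPS_PER_TOKEN  # theoretical upper bound
realistic = ceiling * 0.3                   # assume ~30% utilisation

print(f"Theoretical ceiling: {ceiling:,.0f} tokens/s per GPU")
print(f"At ~30% utilisation: {realistic:,.0f} tokens/s per GPU")
```

Real throughput depends on batch size, sequence length, memory bandwidth and the serving stack, so treat these numbers only as an upper-bound sanity check.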

Key Features of NVIDIA H200 SXM


Scalable Configuration

Deploy the NVIDIA H200-141G-SXM5x8 configuration, which combines eight powerful H200 SXM GPUs in a single VM, making it ideal for enterprise-scale generative AI, model training and memory-intensive simulations.


High-Speed Networking

Experience high-speed networking of up to 350 Gbps for low-latency and high-throughput workloads. Ideal for ensuring fast data transfer and minimal delays for large-scale workloads. 


Massive Ephemeral NVMe Storage

Get 32 TB of ultra-fast ephemeral storage, perfect for high-speed data processing, temporary caching and handling large training datasets during intensive AI and HPC workloads. 


Huge RAM Capacity 

Enjoy 1920 GB of RAM for smooth operation of large AI models, dataset handling and compute-heavy applications without running into memory bottlenecks.


Snapshot Support for Fast Recovery

Take system snapshots to capture your NVIDIA H200 SXM VM’s exact state, including config and bootable volumes. Restore environments quickly and safely for versioning, rollback or disaster recovery. 


Bootable Volume for Persistent OS State

Every NVIDIA H200 SXM VM comes with a dedicated 100 GB bootable volume, holding OS files and key settings to run the VM’s operating system. 

NVIDIA H200 SXM

Starts from $2.45/hour


Technical Specifications 

VM:
NVIDIA H200 SXM

Generation:
Latest 2024 generation (N3)

Memory:
141GB HBM3e

Flavor Name          Memory       GPU Count  CPU Cores  CPU Sockets  RAM (GB)  Root Disk (GB)  Ephemeral Disk (GB)  Region
n3-H200-141G-SXM5x8  141GB HBM3e  8          192        2            1920      100             32000                CANADA-1
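For capacity planning, the per-GPU numbers in the flavor above can be rolled up into VM-wide totals. A quick sketch, using only figures from the table:

```python
# Aggregate resources of the n3-H200-141G-SXM5x8 flavor,
# computed from the per-GPU figures in the spec table.
GPU_COUNT = 8
HBM_PER_GPU_GB = 141      # HBM3e per H200 SXM GPU
RAM_GB = 1920             # system RAM for the whole VM
CPU_CORES = 192

total_hbm = GPU_COUNT * HBM_PER_GPU_GB   # total GPU memory across the VM
ram_per_core = RAM_GB / CPU_CORES        # system RAM available per CPU core

print(f"Total HBM3e across 8 GPUs: {total_hbm} GB")      # 1128 GB
print(f"System RAM per CPU core:   {ram_per_core:.0f} GB")  # 10 GB
```

The 1128 GB of combined HBM3e is the figure that matters when judging whether a model's weights and KV cache fit on a single VM without cross-node communication.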

Frequently Asked Questions

What is the NVIDIA H200 SXM?

The NVIDIA H200 SXM is a high-performance GPU built on the Hopper architecture, designed to accelerate AI, HPC and generative AI workloads. It offers breakthrough performance with advanced memory and bandwidth, ideal for large-scale workloads. 

How do I access the NVIDIA H200 SXM on Hyperstack?

Log in to the Hyperstack cloud GPU platform to access the NVIDIA H200 SXM on demand.

How much memory does the NVIDIA H200 SXM have?

The NVIDIA H200 SXM is equipped with a massive 141GB of HBM3e memory, providing exceptional capacity for running memory-intensive AI models and large datasets. 

Is the NVIDIA H200 SXM ideal for generative AI workloads?

Yes, the NVIDIA H200 SXM is highly suited to generative AI tasks. It delivers up to 2x the performance of the NVIDIA H100, making it ideal for accelerating model training and inference, particularly in large-scale applications like GPT and Llama 2.

How much RAM does the NVIDIA H200 SXM VM have?

Each NVIDIA H200 SXM VM on Hyperstack features 1920 GB of system RAM.

What is the TDP of NVIDIA H200 SXM?

The TDP for the NVIDIA H200 SXM is up to 700W. 
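Since the flavor above bundles eight GPUs, the 700W TDP figure implies a GPU-only power envelope for the whole VM. A rough sketch (GPU TDP only; CPUs, RAM, NVMe and networking are excluded, so real node draw is higher):

```python
# GPU-only power envelope of the 8-GPU flavor, from the 700W TDP figure.
# Note: excludes CPUs, RAM, NVMe and networking, so actual draw is higher.
GPU_COUNT = 8
TDP_W = 700                                # max TDP per H200 SXM

gpu_power_kw = GPU_COUNT * TDP_W / 1000    # 5.6 kW of GPU power at full load
energy_per_hour_kwh = gpu_power_kw * 1.0   # kWh consumed per hour at full TDP

print(f"GPU power at full TDP: {gpu_power_kw} kW")
print(f"Energy per hour:       {energy_per_hour_kwh} kWh")
```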

Accessible

Affordable

Efficient