
Build and Innovate with the NVIDIA H200 SXM
Designed to accelerate modern AI workloads and boost HPC performance with larger memory, the NVIDIA H200 SXM is now available on-demand on Hyperstack.

Unrivalled Performance in:
Generative AI
Up to 2x the speed of the NVIDIA H100 for model training and inference.
LLM Inference
Delivers 2x the performance of the NVIDIA H100 for LLMs like Llama 2 70B.
High-Performance Computing (HPC)
Offers 110x faster MILC performance than dual x86 CPUs, perfect for memory-heavy tasks.
AI & Machine Learning (ML)
3,958 TFLOPS of FP8 compute supercharge AI model training for lightning-fast results.
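To put that FP8 throughput to work, libraries such as NVIDIA Transformer Engine expose FP8 execution on Hopper-class GPUs. Below is a minimal sketch, assuming the transformer_engine package and a recent PyTorch build are installed on the VM; the layer sizes and scaling recipe are illustrative only, not a recommended configuration.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Illustrative FP8 recipe: hybrid E4M3/E5M2 formats with delayed scaling.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

# A single Transformer Engine linear layer stands in for a full model here.
layer = te.Linear(4096, 4096, bias=True).cuda()
inp = torch.randn(2048, 4096, device="cuda")

# Forward and backward passes run in FP8 inside the autocast context.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)
out.float().sum().backward()
```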
Key Features of the NVIDIA H200 SXM
Scalable Configuration
Deploy the NVIDIA H200-141G-SXM5x8 configuration, which combines eight powerful H200 SXM GPUs, ideal for enterprise-scale generative AI, model training and memory-intensive simulations.
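As one illustration of what the eight-GPU configuration enables, the sketch below serves Llama 2 70B with vLLM, sharding the model across all eight H200s via tensor parallelism. It assumes vLLM is installed and that your Hugging Face account has access to the meta-llama/Llama-2-70b-chat-hf weights; the model name and sampling settings are examples, not requirements.

```python
from vllm import LLM, SamplingParams

# Shard the 70B model across the eight GPUs in the H200-141G-SXM5x8 configuration.
llm = LLM(model="meta-llama/Llama-2-70b-chat-hf", tensor_parallel_size=8)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarise what HBM3e memory changes for LLM serving."], params)
print(outputs[0].outputs[0].text)
```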
High-Speed Networking
Experience high-speed networking of up to 350 Gbps for low-latency and high-throughput workloads. Ideal for ensuring fast data transfer and minimal delays for large-scale workloads.
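One simple way to exercise that bandwidth is a PyTorch all-reduce over the NCCL backend. The sketch below assumes the job is launched with torchrun (for example, `torchrun --nproc_per_node=8 allreduce.py` on each VM) so that RANK, WORLD_SIZE, MASTER_ADDR and LOCAL_RANK are set for every process; the tensor size is arbitrary.

```python
import os
import torch
import torch.distributed as dist

# Initialise the process group over NCCL (torchrun supplies the rendezvous env vars).
dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# ~1 GB of FP32 data; the all-reduce traverses NVLink within a node and the
# network fabric between nodes.
payload = torch.ones(256 * 1024 * 1024, device="cuda")
dist.all_reduce(payload, op=dist.ReduceOp.SUM)

dist.destroy_process_group()
```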
Massive Ephemeral NVMe Storage
Get 32 TB of ultra-fast ephemeral storage, perfect for high-speed data processing, temporary caching and handling large training datasets during intensive AI and HPC workloads.
Huge RAM Capacity
Enjoy 1920 GB RAM for smooth operation of large AI models, dataset handling and compute-heavy applications without running into memory bottlenecks.
Snapshot Support for Fast Recovery
Take system snapshots to capture your NVIDIA H200 SXM VM’s exact state, including config and bootable volumes. Restore environments quickly and safely for versioning, rollback or disaster recovery.
Bootable Volume for Persistent OS State
Every NVIDIA H200 SXM VM comes with a dedicated 100 GB bootable volume, holding OS files and key settings to run the VM’s operating system.
NVIDIA H200 SXM
Starts from $2.45/hour

Technical Specifications
VM: NVIDIA H200 SXM
Generation: Latest 2024 Generation N3
Memory: 141 GB HBM3e
Frequently Asked Questions
What is the NVIDIA H200 SXM?
The NVIDIA H200 SXM is a high-performance GPU built on the Hopper architecture, designed to accelerate AI, HPC and generative AI workloads. It offers breakthrough performance with advanced memory and bandwidth, ideal for large-scale workloads.
How do I access the NVIDIA H200 SXM on Hyperstack?
Log in to Hyperstack, our cloud GPU platform, to access the NVIDIA H200 SXM on demand.
How much memory does the NVIDIA H200 SXM have?
The NVIDIA H200 SXM is equipped with a massive 141 GB of HBM3e memory, providing exceptional capacity for running memory-intensive AI models and large datasets.
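For a rough sense of scale, the back-of-the-envelope sketch below estimates weight memory for a 70B-parameter model at different precisions; it counts weights only and ignores KV cache, activations and optimizer state.

```python
# Weights-only memory estimate for a 70B-parameter model (illustrative).
params = 70e9

gb_fp16 = params * 2 / 1e9   # 2 bytes per parameter -> ~140 GB
gb_fp8 = params * 1 / 1e9    # 1 byte per parameter  -> ~70 GB

print(f"FP16/BF16 weights: ~{gb_fp16:.0f} GB")
print(f"FP8 weights:       ~{gb_fp8:.0f} GB")
```

Under these assumptions, the FP8 weights of a 70B model sit comfortably within a single 141 GB H200 with headroom for KV cache, while FP16 weights only just fit and in practice are spread across multiple GPUs.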
Is the NVIDIA H200 SXM ideal for generative AI workloads?
Yes, the NVIDIA H200 SXM is highly suited to generative AI tasks. It delivers up to 2x the performance of the NVIDIA H100, making it ideal for accelerating model training and inference in large-scale applications like GPT and Llama 2.
How much RAM does the NVIDIA H200 SXM VM have?
The NVIDIA H200 SXM VM comes with 1920 GB of RAM.
What is the TDP of the NVIDIA H200 SXM?
The TDP for the NVIDIA H200 SXM is up to 700W.