<img alt="" src="https://secure.insightful-enterprise-intelligence.com/783141.png" style="display:none;">

Access NVIDIA H100s from just $2.06/hour. Reserve here

Deploy 8 to 16,384 NVIDIA H100 SXM GPUs on the AI Supercloud. Learn More

|

Introducing the NVIDIA DGX GH200 Grace Hopper Superchip to Hyperstack

Published on 3 Jul 2024 | Updated: 29 Jul 2024

We're excited to announce that Hyperstack will soon be offering the revolutionary NVIDIA DGX GH200 Grace Hopper Superchip by request. This addition to our high-performance computing solutions is built to tackle massive AI and HPC workloads with unprecedented efficiency.

About NVIDIA DGX GH200

The NVIDIA DGX GH200 is a new class of AI supercomputer. This system is ideal for LLMs and the most demanding multimodal workloads. By combining the Grace CPU and Hopper GPU into a single, tightly integrated package, NVIDIA has created a CPU+GPU superchip that delivers exceptional performance and efficiency.

Key Features of NVIDIA DGX GH200 

The key features of our latest offering, the NVIDIA DGX GH200, include:

  • Integrated Design: The NVIDIA DGX GH200 merges CPU and GPU into a unified unit. This optimises communication and minimises latency.
  • NVLink-C2C Connection: With an impressive 900 GB/s of bidirectional bandwidth, the NVLink-C2C connection between CPU and GPU far outpaces traditional PCIe-based setups (a simple way to sanity-check transfer bandwidth on your own hardware is sketched after this list).
  • Expansive Memory: Boasting 480GB of LPDDR5X CPU memory and 96GB of HBM3 GPU memory, the NVIDIA DGX GH200 provides ample space for even the largest models and datasets.
  • Unmatched Scalability: Designed for multi-GPU configurations, systems like the NVIDIA DGX GH200 can interconnect up to 256 GH200 superchips via a high-bandwidth NVLink network.
  • AI-Optimised Architecture: Features like an enhanced Transformer Engine boost performance for large language models and other AI workloads.
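
To put the NVLink-C2C bandwidth figure in context, one simple probe is to time a large pinned-memory copy from host to device. Below is a minimal, illustrative sketch using PyTorch; the buffer size and iteration count are arbitrary assumptions, not part of NVIDIA's specification, and results will vary by platform.

```python
# Minimal sketch: estimate host-to-device copy bandwidth with PyTorch.
# Buffer size and iteration count are illustrative assumptions.
import time
import torch

def h2d_bandwidth_gbps(size_mb: int = 1024, iters: int = 20) -> float:
    assert torch.cuda.is_available(), "CUDA device required"
    n_bytes = size_mb * 1024 * 1024
    # Pinned (page-locked) host memory gives a fairer picture of transfer speed.
    host = torch.empty(n_bytes, dtype=torch.uint8, pin_memory=True)
    dev = torch.empty(n_bytes, dtype=torch.uint8, device="cuda")

    dev.copy_(host, non_blocking=True)   # warm-up copy, excluded from timing
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(iters):
        dev.copy_(host, non_blocking=True)
    torch.cuda.synchronize()             # wait for all queued copies to finish
    elapsed = time.perf_counter() - start

    return (n_bytes * iters) / elapsed / 1e9  # GB/s

if __name__ == "__main__":
    print(f"Host-to-device bandwidth: {h2d_bandwidth_gbps():.1f} GB/s")
```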

Performance in Action: OpenSora Benchmarks  

To showcase the NVIDIA DGX GH200's capabilities, we conducted benchmarks using OpenSora, a state-of-the-art text-to-video generation model. We compared an NVIDIA H100-80GB-SXM5 system (CUDA 12.2, NVIDIA driver 535.183.01, Linux 6.5.0-41-generic) against a GH200 system (CUDA 12.3, NVIDIA driver 535.183.01, Linux 6.5.0-1021-nvidia-64k), both with OpenSora's default settings.

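For readers who want to run a similar comparison on their own hardware, the sketch below shows one way to time a text-to-video generation end to end. It treats the model as an opaque callable; the pipeline-loading and generation calls in the usage example are hypothetical placeholders for however your OpenSora installation exposes inference, not actual OpenSora API names.

```python
# Illustrative timing harness for a text-to-video run. The pipeline loading
# and generation calls in the usage example are hypothetical stand-ins,
# not real OpenSora APIs.
import statistics
import time
import torch

def benchmark(generate, prompt: str, runs: int = 3) -> float:
    """Return the median wall-clock seconds per generation."""
    generate(prompt)              # warm-up: exclude model load/compile effects
    torch.cuda.synchronize()

    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        generate(prompt)
        torch.cuda.synchronize()  # wait for all GPU work before stopping the clock
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Example usage (names are placeholders for your own inference entry point):
# pipeline = load_pipeline("opensora", device="cuda")
# secs = benchmark(lambda p: pipeline(p, num_frames=16), "a drone shot of a coastline")
# print(f"median generation time: {secs:.1f} s")
```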

Performance Comparisons 

Here's how the NVIDIA DGX GH200 performed compared to our NVIDIA H100-80GB-SXM5 offering: 

[Charts: OpenSora generation benchmarks, NVIDIA GH200 vs NVIDIA H100-80GB-SXM5]

Ideal Use Cases of NVIDIA DGX GH200 

The NVIDIA DGX GH200's unique architecture makes it ideal for a wide range of cutting-edge applications:

  • Large Language Models: You can train and run inference on massive language models with ease, thanks to the improved Transformer Engine and vast memory capacity.
  • Multimodal AI: You can efficiently process multiple data types (text, image, video, audio) in complex AI systems.
  • Complex Simulations: Your HPC workloads requiring frequent CPU-GPU communication will see significant performance gains.
  • Video Generation and Processing: As demonstrated by our OpenSora benchmarks, the NVIDIA DGX GH200 excels in computationally intensive video tasks.
  • Large-Scale Distributed Training: The GH200's scalability makes it the perfect choice for large-scale AI model training across multiple nodes (a minimal multi-node training skeleton is sketched after this list).
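
As a rough illustration of what multi-node scaling looks like in practice, the sketch below wraps a model in PyTorch DistributedDataParallel. It is a generic pattern, not a GH200-specific setup; the model, data, and optimiser are placeholders, and launch details depend on your cluster.

```python
# Generic multi-GPU training skeleton with PyTorch DDP; launch with e.g.
#   torchrun --nnodes=<N> --nproc-per-node=<GPUs per node> train.py
# Model, data, and optimiser choices here are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # NCCL handles GPU-to-GPU traffic
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()     # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):                        # placeholder training loop
        x = torch.randn(32, 4096, device="cuda")
        loss = model(x).pow(2).mean()
        loss.backward()                            # gradients are all-reduced across ranks
        optimiser.step()
        optimiser.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```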

Hyperstack aims to provide access to the latest high-performance computing solutions, such as the NVIDIA DGX GH200 Grace Hopper Superchip. You can easily tackle complex scientific simulations or process massive multimodal datasets with the NVIDIA DGX GH200's computational power.

Accelerate your AI Workloads with Hyperstack's High-end GPUs. Sign up now!

