
Published on 10 Mar 2025

Hyperstack Weekly Rundown 23: Latest Edition


We generated an adorable cat 🐱 with the latest Wan 2.1 model and it looked so real you’d think it was filmed. Want to know how we did it? Read our full newsletter to explore our latest tutorial on video generation with Wan 2.1. But that’s not all—we have the latest updates, pricing drops and powerful new features on Hyperstack. There's plenty to explore, let's get started.

New On-Demand Pricing for NVIDIA H100 SXM

We’ve got exciting news for all Hyperstack users!

We have dropped the on-demand price of our NVIDIA H100 SXM 8x configuration to just $2.40 per GPU per hour, that is $19.20 per hour for the full 8-GPU node. Now, you can tap into the full power of our NVIDIA H100 SXM GPUs without breaking the bank.

At Hyperstack, we understand that training and fine-tuning AI models at scale can quickly rack up costs. Every compute hour counts, and so does every dollar. That’s why we’ve rolled out this new pricing for our NVIDIA H100 SXM, giving you the power to deploy cutting-edge models, iterate quickly, and launch market-ready solutions affordably.

Whether you’re scaling up a startup, experimenting with the latest models or optimising enterprise workloads, this is your shot to get top-tier performance at a lower cost.

Launch Your NVIDIA H100 SXM

Spin up your NVIDIA H100 SXM in minutes and start building with the best at an affordable price.

New Hyperstack Features and Updates

Here's a sneak peek at what's new on Hyperstack:

Sustainability Icon

We’re making it easier for you to identify and choose eco-friendly regions. A green leaf icon now marks regions powered by 100% sustainable energy, so you can make informed decisions when deploying your workloads. Any Environments created in these regions will also carry this label.

Documentation Updates

We've updated our VM features and restrictions documentation to give you clearer insights into when Snapshot or Hibernation actions may be temporarily unavailable. This ensures you can plan your workflows more effectively.

Enhanced Performance & Reliability

We’ve enhanced the reliability of VMs, Snapshots, and Volumes to ensure a more stable and seamless cloud experience.

New in Our Blog

Check out our latest blogs and tutorials on Hyperstack:

How to Deploy QwQ 32B on Hyperstack: A Comprehensive Guide

We explored the latest QwQ 32B on Hyperstack. QwQ 32B, Alibaba’s powerful 32.5 billion-parameter model, excels in math, coding, and logical problem-solving with an impressive 131,072-token context length. Whether you're tackling complex calculations, generating optimised code, or building AI-powered applications, this guide will help you get started seamlessly. Check out the full tutorial here!


How to Generate Videos with Wan 2.1: A Comprehensive Guide

We explored Wan 2.1, Alibaba’s latest open-source AI model for text-to-video generation, and generated an adorable cat on Hyperstack. With advanced multimodal support, it allows users to create and edit videos using text, images and video references. The model excels in video realism, dynamic scene generation and fine-grained editing, ranking highly on VBench for its performance. Check out the full tutorial here!


5 LLM Inference Techniques to Reduce Latency and Boost Performance

One of the biggest headaches with LLMs is the slow, compute-heavy inference process, especially when every millisecond counts. But here’s the good news: with the right optimisation techniques, you can reduce latency, boost performance and make your LLM run efficiently at scale. Our latest article explores 5 LLM inference techniques to reduce latency and optimise your model performance. Check out the full blog here!


LoRA for Stable Diffusion Fine-Tuning: Understand Why It's Efficient

Stable Diffusion is a popular text-to-image model that generates images from text prompts. Fine-tuning it traditionally requires significant computational power, but LoRA offers a solution by reducing the number of trainable parameters. Our latest blog explores why LoRA is ideal for efficiently fine-tuning Stable Diffusion models. Check out the full blog here!


Meet Us at NVIDIA GTC 2025

We’re heading to NVIDIA GTC 2025 this coming week and couldn’t be more excited. After an amazing time last year, we’re back to connect with industry leaders, innovators and AI pioneers once again.


If you’re attending, let’s connect. Book a meeting with us here to get more details on where to find us; we’d love to meet you.

What's Coming Next?

We will be introducing exciting features and updates soon. Get a sneak peek below:

  •  Kubernetes Enhancement: We will soon introduce manual scaling for worker nodes in On-Demand Kubernetes clusters. Also, get ready for increased performance and stability for a more reliable and optimised experience.

Have you explored our on-demand Kubernetes clusters yet? Check out our full guide here!

Get Featured on Hyperstack


We’re eager to hear all about your Hyperstack journey! Share your success story with us and you might see your story in our upcoming weekly newsletter.


That’s a wrap for this edition of the Hyperstack Weekly Rundown! Stay tuned for more updates, new features and announcements next week. Until then, don’t forget to subscribe to our newsletter below for exclusive insights and the latest scoop on AI and GPUs, delivered right to your inbox!

Missed the Previous Editions?

Catch up on everything you need to know from Hyperstack Weekly below:

👉 Hyperstack Weekly Rundown #21

👉 Hyperstack Weekly Rundown #22

Subscribe to Hyperstack!

Enter your email to get updates to your inbox every week

Get Started

Ready to build the next big thing in AI?

Sign up now
Talk to an expert

