<img alt="" src="https://secure.insightful-enterprise-intelligence.com/783141.png" style="display:none;">

NVIDIA H100 SXMs On-Demand at $3.00/hour - Reserve from just $2.10/hour. Reserve here

Deploy 8 to 16,384 NVIDIA H100 SXM GPUs on the AI Supercloud. Learn More

|

Published on 15 Nov 2024

HYPERSTACK WEEKLY RUNDOWN #10: Latest Edition

Updated: 16 Dec 2024


Welcome to the 10th edition of the Hyperstack Weekly Rundown! We’re beyond excited to hit this milestone and grateful for all the fantastic feedback we’ve received from you, our readers. Every week we bring you the latest updates, product news and blogs. 

So, thank you for being part of this journey. And now for the big news we've been teasing: our Hyperstack LLM Inference Toolkit is officially live! Keep reading for all the details.

Hyperstack LLM Inference Toolkit Is Live

As we shared in our previous edition, the Hyperstack LLM Inference Toolkit is officially live, right on schedule. This open-source tool simplifies the deployment and management of large language models (LLMs) on Hyperstack, offering you a fast and efficient way to get started. Built as a Python package, it features an intuitive UI, API integrations and comprehensive documentation to guide you through your LLM setup. If you're unsure which GPU suits your LLM workload best, try our GPU Selector for LLMs today!

[Screenshot: monitoring view in the LLM Inference Toolkit]

With this toolkit, you gain flexibility in choosing specific LLMs for custom use cases, benefit from cost-effective options through smaller models and enjoy enhanced control over API endpoints—all while maintaining data privacy and sovereignty. For more technical details and installation instructions, visit our GitHub page now.
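To give a sense of what working with those API endpoints can look like, here is a minimal Python sketch for querying a model once it is deployed. It assumes the toolkit exposes an OpenAI-compatible chat completions endpoint behind a local proxy; the URL, port, API key, model name and payload shape below are illustrative assumptions rather than the toolkit's documented interface, so refer to the GitHub documentation for the exact API.

```python
import requests

# Illustrative sketch only: the base URL, API key handling, model name and
# payload shape are assumptions, not the toolkit's documented interface.
API_BASE = "http://localhost:8000/v1"   # assumed local proxy started by the toolkit
API_KEY = "your-api-key"                # placeholder credential

response = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",  # whichever model you deployed
        "messages": [
            {"role": "user", "content": "Summarise what GPU-as-a-Service means."}
        ],
        "max_tokens": 200,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```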

Ready to kick-start your innovative projects on Hyperstack? Check out the tutorial video below to get started with our LLM Inference Toolkit today:

 

New in Our Blog

This week is filled with tutorials and product insights. Here’s a quick look at what’s new:

Deploying and Using Stable Diffusion 3.5 on Hyperstack: A Quick Start Guide

Stable Diffusion 3.5 was released recently, building on the success of earlier versions with improved image fidelity, faster generation times and enhanced support for diverse artistic styles. Our latest tutorial walks through how to deploy and use Stable Diffusion 3.5 on Hyperstack. To get started, check out the full tutorial here.

[Screenshot: final step of the Stable Diffusion 3.5 deployment]

Deploying and Using Qwen 2.5 Coder 32B on Hyperstack: A Quick Start Guide

Have you tried the latest Qwen 2.5 Coder 32B yet? If not, our latest tutorial shows you how to deploy Qwen 2.5 Coder on Hyperstack. The new Qwen model is ideal for projects that need cutting-edge instruction-following capabilities from a large language model. To get started, check out the full tutorial here.

[Screenshot: click on Extensions at the top and install]

What is GPUaaS (GPU-as-a-Service)? Here's What You Need to Know

Our latest blog covers everything you need to know about GPU-as-a-Service: its benefits, ideal use cases and why Hyperstack could be the right cloud partner for your business. Check out the full blog to learn more.

Ready to experience the future of GPU-as-a-Service? Take a quick tour of Hyperstack below and discover how to get started on our cloud platform.

 

Meet us at the AI Hackathon

The countdown is on. Our AI Hackathon (AI Video Generation) is happening next week at our London office. This in-person event is your opportunity to collaborate with fellow innovators and compete to create videos using advanced AI tools. Co-hosted by AI London Meetup, Hackathon London Meetup, and NexGen Cloud, it’s the perfect chance to flex your skills and drive innovation in AI video generation.

Spots are limited and time is running out! Secure your place today.

Hear It from Our Happy Customers 💬

Hear it from those who've partnered with us: our community is always happy with our team. Recently, Daniel shared his outstanding experience with Hyperstack:

[Testimonial from Daniel]

Be the Next to Share Your Success Story with Hyperstack

Thank you for joining us on this journey and being a part of our 10th weekly rundown. We hope you enjoyed this week's updates as much as we enjoyed putting them together. Stay tuned for the next edition. Until then, don't forget to subscribe to our newsletter below for exclusive insights and the latest scoop on AI and GPUs, delivered right to your inbox!

Missed the Previous Editions? 

Catch up on everything you need to know from Hyperstack Weekly below:

👉 Hyperstack Weekly Rundown #8

👉 Hyperstack Weekly Rundown #9

Subscribe to Hyperstack!

Enter your email to get updates to your inbox every week

