Hyperstack - Product Updates

HYPERSTACK WEEKLY RUNDOWN #7

Welcome to the Seventh Edition

Written by Damanpreet Kaur Vohra | Oct 18, 2024 12:07:45 PM

Guess what? It’s that day of the week again: time for the Hyperstack Weekly! We’ve got some fantastic updates and fresh reads that you won’t want to miss. So, grab your favourite snack (you know you have one 🍔) and let’s explore the latest and greatest from Hyperstack.

New Hyperstack Feature You’ll Love ❤️

We’re constantly working to make your Hyperstack experience even better! Here’s what we rolled out this week:

Environment Filtering on APIs 🌐

Remember when we teased you about this in our previous edition? Well, the wait is over! Filtering by environment is now super easy on several endpoints for Virtual Machines, Firewalls, Volumes, and Billing Usage. This means you can effortlessly manage your resources and access exactly the information you need.

New in Our Blog 📝

Looking for some fresh reads? We’ve got you covered! Check out our latest blogs to enhance your Hyperstack experience:

How to Optimise LLMs on Hyperstack:

5 Best Ways to Boost LLM Efficiency 

Our latest guide explores the best practices for optimising your LLMs on Hyperstack. From selecting the right GPU (Try Our GPU Selector for LLMs Now) to using advanced techniques, this blog is packed with tips to help you maximise performance and efficiency. For full details, read the blog here.

The "Strawberry" Problem with LLMs:

Understanding Why LLMs Misspell Common Words

Have you heard of the famous "strawberry🍓" problem with LLMs that's been taking the internet by storm? This is when some LLMs fail to count how many "r's" are in the word "strawberry." But what could be the reason? Our latest case study explores how to solve the Strawberry problem by fine-tuning your LLM with Hyperstack. For full details, read the blog here.
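A quick way to see why this trips models up: counting letters is trivial for code, but an LLM never sees individual characters; it sees subword tokens. The split below is hypothetical (real tokenizers vary), but it illustrates how the three r's end up hidden across token boundaries.

```python
# Character-level counting is trivial in code -- which is exactly the point:
# LLMs operate on subword tokens, not characters.
word = "strawberry"
print(word.count("r"))  # 3

# Illustrative (hypothetical) subword split; real tokenizers differ.
tokens = ["straw", "berry"]
# No single token contains all three r's at once:
print([t.count("r") for t in tokens])  # [1, 2]
```

Fine-tuning on character-level tasks, as the case study covers, is one way to teach a model to bridge that gap.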

Llama-3.1-Nemotron-70B-Instruct Tutorial Coming Next Week

Have you heard the buzz? NVIDIA recently released its fine-tuned Llama-3.1-Nemotron-70B-Instruct model, surpassing GPT-4o and Claude 3.5 Sonnet on key benchmarks. Llama-3.1-Nemotron-70B-Instruct has already aced the tricky task of counting the multiple ‘r’s in "strawberry", which some LLMs still fail (as we discussed in our latest case study). Stay tuned as we release a detailed tutorial on deploying and using Llama-3.1-Nemotron-70B-Instruct on Hyperstack. Get ready to take your AI projects to the next level with one of the most advanced models available!

Our LLM London Event Was a Hit

Our recent LLM London meetup at our London office was nothing short of amazing. Stephen Ward and Kevin Wright of NexGen Cloud, along with Chris Parsons, Co-founder & CTO of Cherrypick, explored how layering advanced programming concepts over basic prompts can supercharge AI capabilities. We also shared some surprising GPU benchmarks and a quick sneak peek at our free LLM GPU Selector Tool.

If you missed it, the FOMO is real because this was one for the books. But don’t sweat it, we’ve got plenty of exciting events lined up, so stay tuned for what’s coming next. 

Hear It from Our Happy Customers 💬

Don’t just take our word for it. Here’s what Grzegorz had to say about their experience with Hyperstack:

Get Featured on Hyperstack with Your Success Story

That's a wrap for this week. Catch you next time with more exciting updates from Hyperstack. Don’t forget to subscribe to our newsletter below for exclusive insights and the latest scoop on AI and GPUs, delivered right to your inbox!

Missed the Previous Editions? 

Catch up on everything you need to know from Hyperstack Weekly below:

👉 Hyperstack Weekly Rundown #4

👉 Hyperstack Weekly Rundown #5

👉 Hyperstack Weekly Rundown #6