Updated: 16 Dec 2024
Surprise, surprise! 😉 It’s that time of the week again, so welcome to the latest edition of your Hyperstack Weekly Rundown. We’re back with a bunch of new updates and exciting tutorials, just for you. Let's get started.
Contract Updates on Hyperstack🌐
Virtual machines (VMs) can now be added directly to your contract. Here's what you get:
- Greater flexibility and ease of use, with no need for support calls.
- A detailed breakdown of your total consumption on the billing overview screen, showing your total credit balance, on-demand balance, and contract usage.
- Resource activity and contract pages now include all contract details, like expiration dates, total GPU counts, pricing, and usage.
Please note that contracted user accounts need to be separate from on-demand accounts.
On-Demand NVIDIA H100 SXM: Available at Just $3.00/hr
The NVIDIA H100 SXM is now available on demand. Get access to the NVIDIA H100 SXM for only $3.00 per hour. Designed for peak AI, deep learning and HPC performance, the NVIDIA H100 SXM’s architecture maximises throughput and efficiency, with NVLink providing an impressive 900 GB/s of bidirectional GPU communication.
But that's not all, you also get:
- High-Speed Networking: Enjoy networking speeds of up to 350 Gbps for faster AI training.
- Massive Storage Options: Choose from persistent NVMe for long-term data needs or 32,000 GB of ephemeral storage to drive complex, high-speed operations.
- Flexible Configuration: 8 GPUs, 192 CPU cores, 1800 GB RAM—ready for distributed, compute-heavy applications.
Get Access to the NVIDIA H100 SXM Now on Hyperstack
New in Our Blog 📝
Ready to deploy the latest LLMs on Hyperstack? Check out our latest tutorials to get started:
Deploying and Using Granite 3.0 8B on Hyperstack:
A Quick Start Guide
IBM has just released its latest LLM, Granite 3.0 8B. This dense decoder-only model has been trained on more than 12 trillion tokens and rivals advanced LLMs from Meta and Mistral AI on Hugging Face’s OpenLLM Leaderboard. To get started, check out our full tutorial here.
Deploying and Using Llama-3.1 Nemotron 70B on Hyperstack:
A Quick Start Guide
Remember when we teased you about this tutorial? Well, the wait is finally over. This guide is your ticket to deploying the Llama-3.1 Nemotron 70B like a pro. With high-end Hyperstack GPUs, you can experiment with this fine-tuned Llama 3.1 model quickly. To get started, check out our full tutorial here.
How to Reduce AI Compute Costs with Hyperstack:
5 Most Effective Ways
Fine-tuning existing models or training large foundation models for specific use cases can be affordable, but it still requires considerable resources. Want to keep those AI computing costs from getting out of control? Check out our latest blog for full details.
Hear It from Our Happy Customers 💬
Don’t just take our word for it. Here’s what Rexa had to say about their experience with Hyperstack:
Get Featured on Hyperstack with Your Success Story
That's a wrap for this week. Catch you next time with more exciting updates from Hyperstack. Don’t forget to subscribe to our newsletter below for exclusive insights and the latest scoop on AI and GPUs, delivered right to your inbox!
Missed the Previous Editions?
Catch up on everything you need to know from Hyperstack Weekly below:
👉 Hyperstack Weekly Rundown #5
Subscribe to Hyperstack!
Enter your email to get updates to your inbox every week
Get Started
Ready to build the next big thing in AI?