We’re so close to a milestone, almost at the 10th edition of our Hyperstack Weekly Rundown. But today, let’s celebrate the ninth, packed with exciting updates, the latest tutorials and a glimpse into what's coming next.
Thank you for being part of this journey as we continue sharing insights, releases and tools to boost your Hyperstack experience. Here’s what we’ve lined up for you this week.
Have you started using our Terraform provider yet? If not, it's the perfect time to automate your infrastructure management with Hyperstack. Our Terraform provider, currently in alpha, allows you to easily provision and manage your cloud resources with familiar workflows. Explore our resources by visiting our GitHub repository or checking out the full Terraform provider getting started guide here.
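While the provider matures, the same create-a-VM flow can also be scripted directly against the Hyperstack API. The Python sketch below is illustrative only: the endpoint URL, the api_key header and the payload field names are assumptions, so double-check them against the API documentation before use.

```python
import os
import requests

# Minimal sketch of provisioning a VM via the Hyperstack API.
# NOTE: the endpoint path, auth header and payload fields are assumptions
# for illustration only -- consult the Hyperstack API docs for the exact schema.
API_URL = "https://infrahub-api.nexgencloud.com/v1/core/virtual-machines"  # assumed endpoint

payload = {
    "name": "demo-vm",                          # hypothetical VM name
    "environment_name": "default",              # assumed environment
    "image_name": "Ubuntu Server 22.04 LTS",    # assumed image label
    "flavor_name": "n1-A100x1",                 # hypothetical flavour name
    "key_name": "my-ssh-key",                   # existing SSH key in your account
    "count": 1,
}

response = requests.post(
    API_URL,
    headers={"api_key": os.environ["HYPERSTACK_API_KEY"]},  # assumed auth header
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```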
We’ve upgraded our UI to enhance your Hyperstack experience. You can now easily view your contract details directly on Hyperstack. The billing overview screen provides a detailed breakdown of your total consumption, including credit balance, on-demand balance, and contract usage. For quick access, the resource activity and contract pages display all relevant contract information, such as expiration dates, GPU counts, pricing, and usage. See the images below for reference:
The screenshots below show the setup details for a virtual machine under a contract on Hyperstack, highlighting the VM's environment, OS image, SSH key and flavour specifications, including the contracted VM flavour, in this case an A100-80G-PCIe GPU.
The screenshot below displays the billing section under the prepaid plan of the contract. It shows the current credit balance, which you can top up, along with other essential plan details and the usage costs for the current month of your contract.
The screenshot below displays the resource consumption of your VM on Hyperstack. By selecting the start and end dates of your consumption period, you can see the current total cost.
Please note: if your account is associated with a contract, you will need to create a new account to spin up non-contracted VMs on demand.
Here’s a peek at what’s fresh on the Hyperstack blog!
Generating Images with Stable Diffusion: A Comprehensive Guide
With our latest tutorial, learn how to generate high-quality images using Stable Diffusion. Hyperstack offers the perfect infrastructure to support it at scale. This step-by-step guide covers everything from VM setup to configuring the Stable Diffusion Web UI for seamless image generation. To get started, check out our full tutorial here.
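The tutorial walks through the Stable Diffusion Web UI, but if you prefer a programmatic route on a Hyperstack GPU VM, a minimal sketch with the Hugging Face diffusers library looks like this (the checkpoint name and prompt are just examples, not the tutorial's exact setup):

```python
# Minimal sketch: programmatic image generation with Hugging Face diffusers
# on a GPU VM. The tutorial itself uses the Stable Diffusion Web UI; this is
# only an illustrative alternative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # example checkpoint
    torch_dtype=torch.float16,           # half precision to save GPU memory
)
pipe = pipe.to("cuda")

image = pipe(
    "a futuristic data centre at sunset, highly detailed",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("output.png")
```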
Notebook Llama on Hyperstack: A Quick Start Guide
Learn how to transform PDFs into engaging podcasts with Notebook Llama on Hyperstack. Our step-by-step guide walks you through setup, deployment and seamless audio creation. To get started, check out our full tutorial here.
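To give a flavour of the pipeline, the first step is simply getting clean text out of the PDF before an LLM turns it into a podcast script. The sketch below is not Notebook Llama's own code, just an illustration using pypdf with a hypothetical file name:

```python
# Illustrative first step of a PDF-to-podcast pipeline: extract raw text
# from a PDF so it can be handed to an LLM for script generation.
# Not Notebook Llama's code -- the file name is hypothetical.
from pypdf import PdfReader

reader = PdfReader("paper.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

print(f"Extracted {len(text)} characters from {len(reader.pages)} pages")
```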
5 Ways to Tackle Challenges on Hyperstack
Struggling to train your foundation LLMs? Check out our latest article to overcome challenges like computational demands, cost management and system reliability with NVIDIA H100 GPU options on Hyperstack. Start optimising your AI models today with our latest blog.
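To make the compute and memory pressure concrete, one widely used technique on H100-class GPUs is mixed-precision training. The snippet below is a generic PyTorch sketch rather than code from the blog, with a placeholder model and random data standing in for a real LLM:

```python
# Generic sketch (not from the blog): bfloat16 mixed-precision training,
# a common way to cut memory use and speed up training on H100-class GPUs.
# The model, data and optimiser are placeholders.
import torch

model = torch.nn.Linear(4096, 4096).cuda()                 # placeholder "model"
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(10):
    inputs = torch.randn(8, 4096, device="cuda")           # placeholder batch
    targets = torch.randn(8, 4096, device="cuda")

    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)

    loss.backward()
    optimizer.step()
```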
Get NVIDIA H100 SXM on Hyperstack
The NVIDIA H100 SXM delivers unmatched performance for AI workloads with advanced SXM architecture, NVLink scalability and high-speed networking. Ideal for large-scale model training, fine-tuning, and inference, our NVIDIA H100 SXM ensures optimal efficiency. Check out our latest blog for full details.
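If you spin up a multi-GPU H100 SXM VM, a quick way to confirm that all GPUs are visible and that peer-to-peer (NVLink) access is available between them is a short PyTorch check like the illustrative sketch below:

```python
# Sanity-check sketch for a multi-GPU VM: list visible GPUs and report
# whether peer-to-peer access (used by NVLink) is enabled between pairs.
import torch

count = torch.cuda.device_count()
print(f"Visible GPUs: {count}")

for i in range(count):
    print(torch.cuda.get_device_name(i))
    for j in range(count):
        if i != j:
            p2p = torch.cuda.can_device_access_peer(i, j)
            print(f"  peer access {i} -> {j}: {p2p}")
```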
Latest Features, Updates and What’s Coming Next
Stay ahead with the latest updates from Hyperstack. From the Terraform provider alpha release to the on-demand NVIDIA H100 SXM, we’re constantly innovating to enhance your experience. Check out our monthly update blog for full details.
We’re happy to announce that we’re open-sourcing the Hyperstack LLM Inference Toolkit. With this toolkit, developers and researchers can efficiently deploy and manage LLMs on Hyperstack. It offers automated model deployment, integrated API management and real-time tracking for smooth and consistent workflows, alongside flexible deployment options, proxy API integrations and performance monitoring.
Stay tuned to experiment with the Hyperstack LLM Inference Toolkit this week.
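Ahead of the release, here is roughly what calling a model behind such a proxy could look like. The toolkit's actual endpoints and authentication are not covered here, so the URL, header and OpenAI-style payload below are assumptions for illustration only:

```python
# Hypothetical sketch of querying an LLM behind a proxy API. The toolkit's
# real endpoints and auth may differ -- the URL, header and payload shape
# below assume an OpenAI-style chat completions schema.
import requests

PROXY_URL = "http://localhost:8000/v1/chat/completions"  # assumed local proxy

response = requests.post(
    PROXY_URL,
    headers={"Authorization": "Bearer YOUR_TOKEN"},       # placeholder token
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",      # example model id
        "messages": [{"role": "user", "content": "Summarise NVLink in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```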
Don’t just take our word for it. Here’s what Grzegorz had to say about their experience with Hyperstack:
Get Featured on Hyperstack with Your Success Story
Thank you for joining us on this journey through our latest updates. We’re excited to reach the tenth edition next week and hope you’ll be there to celebrate it with us. Until then, explore this week’s updates and don’t forget to subscribe to our newsletter below for exclusive insights and the latest scoop on AI and GPUs, delivered right to your inbox!
Catch up on everything you need to know from Hyperstack Weekly below:
👉 Hyperstack Weekly Rundown #6