

GPU Selector For LLMs

Find the ideal GPU for your LLM needs with our easy-to-use GPU selector tool. Whether you're fine-tuning or running inference, we'll help you choose the right hardware for your project.

Ready to Find Your GPU?

How to Use the GPU Selector


Step

01

Choose Your Model

Select from our list of popular LLMs or enter any HuggingFace model name.

Step

02

Explore Training Options

View memory requirements for various training approaches:

  • Full fine-tuning
  • LoRA fine-tuning
  • And others
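To give a feel for why the training approach matters so much, here is a minimal sketch of the kind of back-of-the-envelope estimate involved. The rules of thumb (roughly 16 bytes per parameter for full mixed-precision fine-tuning with Adam, and fp16 base weights plus a small adapter overhead for LoRA) are common approximations, not the selector's exact formula, and activation memory is workload-dependent so it is excluded.

```python
def training_memory_gb(params_billion: float, method: str = "full") -> float:
    """Rough GPU memory estimate (GB) for training a model.

    Assumptions (rules of thumb, not exact figures):
      - "full": mixed-precision fine-tuning with Adam, ~16 bytes/parameter
        (fp16 weights + gradients, fp32 master weights + two optimizer moments)
      - "lora": frozen fp16 base weights (~2 bytes/parameter) plus an
        assumed ~20% overhead for adapters and their optimizer state
    Activation memory is excluded (it depends on batch size and sequence length).
    """
    n = params_billion * 1e9
    if method == "full":
        total_bytes = n * 16
    elif method == "lora":
        total_bytes = n * 2 * 1.2
    else:
        raise ValueError(f"unknown method: {method}")
    return total_bytes / 1e9

# A 7B-parameter model as an example:
print(round(training_memory_gb(7, "full")))  # ~112 GB -> multi-GPU territory
print(round(training_memory_gb(7, "lora")))  # ~17 GB  -> fits a single large GPU
```

The gap between the two numbers is exactly why the selector asks about your training approach before recommending hardware.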

Step

03

Check Inference Requirements

See memory needs for different precision levels:

  • Float32
  • Float16
  • Int8
  • And others
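For intuition, inference memory scales almost linearly with the bytes used per parameter, so a weights-only estimate is straightforward. The sketch below is an illustrative approximation (the 20% headroom figure for KV cache and activations is an assumption, not the selector's exact formula).

```python
# Bytes per parameter at common precision levels
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1, "int4": 0.5}

def inference_memory_gb(params_billion: float, precision: str,
                        overhead: float = 1.2) -> float:
    """Weights-only inference memory estimate (GB), with an assumed
    ~20% headroom for KV cache and activations by default."""
    return params_billion * BYTES_PER_PARAM[precision] * overhead

# A 7B-parameter model at each precision level:
for p in ("float32", "float16", "int8"):
    print(f"7B @ {p}: ~{inference_memory_gb(7, p):.0f} GB")
```

Halving the precision halves the memory footprint, which is why a model that needs multiple GPUs at float32 can often serve from a single GPU at int8.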

Step

04

Get GPU Recommendations

Based on your use case, we'll suggest the optimal GPU available on Hyperstack.

Step

05

Start Your Project

Click through to Hyperstack and begin working on your LLM project immediately.

Why Do You Need It?


Precision Matters:

We account for higher-precision tasks requiring more powerful GPUs.


Training vs. Inference:

Our recommendations consider that training typically needs more robust GPUs than inference.


Tailored for You:

We provide personalised suggestions based on your LLM and use case.