Published on 10 Jan 2025

Step-by-Step Guide to Running Meta Llama 3.1 405B

Updated: 10 Jan 2025


In 2024, Meta released Llama 3.1 405B, a groundbreaking open-source AI model. The 405B model offers superior flexibility, control and cutting-edge features, so developers can explore advanced workflows such as synthetic data generation, model distillation and retrieval-augmented generation (RAG). If you are planning to deploy Llama 3.1 405B but are unsure how to start, check out our tutorial below.

In our tutorial, we provide a step-by-step guide to deploying the 405-billion-parameter Llama 3.1 model.

What is Llama 3.1 405B?

Llama 3.1 405B is Meta's most advanced open-source large language model, featuring 405 billion parameters. It excels in multilingual dialogue, outperforming many closed and open-source conversational AI models on industry benchmarks. The model supports multiple languages, enhancing its applicability across diverse linguistic contexts. It can process up to 128,000 tokens, so it handles extensive textual data and maintains coherence over long passages.

Llama 3.1 405B Features

The Llama 3.1 405B comes with new capabilities, including:

  • Multilingual Support: Llama 3.1 405B supports multiple languages, including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, enhancing its applicability across diverse linguistic contexts.

  • Extended Context Length: The model can process up to 128,000 tokens, enabling it to handle extensive textual data and maintain coherence over long passages.

  • Tool Usage Capabilities: Llama 3.1 405B is designed to utilise external tools, expanding its functionality beyond text generation.

  • Open Source Accessibility: As an open-source model, Llama 3.1 405B is accessible for research and development, promoting transparency and innovation in AI applications.

  • Synthetic Data Generation: Generates synthetic data to address privacy and data scarcity challenges.

Steps to Deploy Llama 3.1 405B

Now, let's walk through the step-by-step process of deploying Llama 3.1 405B on Hyperstack.

Step 1: Accessing Hyperstack

  • Go to the Hyperstack website and log in to your account.
  • If you're new to Hyperstack, you'll need to create an account and set up your billing information. Check our documentation to get started with Hyperstack.
  • Once logged in, you'll be greeted by the Hyperstack dashboard, which provides an overview of your resources and deployments.

Step 2: Deploying a New Virtual Machine

Initiate Deployment

  • Look for the "Deploy New Virtual Machine" button on the dashboard.
  • Click it to start the deployment process.

Select Hardware Configuration

  • Choose a GPU flavour with enough memory to serve the FP8-quantised 405B model, such as a node with 8x NVIDIA H100 GPUs [see the attached screenshot].
Choose the Operating System

  • Select the "Ubuntu Server 22.04 LTS R535 CUDA 12.4 with Docker" image.

Select a keypair

  • Select one of the keypairs in your account. Don't have a keypair yet? See our Getting Started tutorial for creating one.

Network Configuration

  • Ensure you assign a Public IP to your virtual machine [see the attached screenshot].
  • This allows you to access your VM from the internet, which is crucial for remote management and API access.

Enable SSH Access

  • Make sure to enable an SSH connection.
  • You'll need this to securely connect and manage your VM.

Configure Additional Settings

Please note: this cloud-init script only enables the API once, for demo purposes. For production environments, consider using secure connections, secret management and monitoring for your API.
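The actual cloud-init script ships with the Hyperstack image, but a minimal sketch of what such a script might look like is below. This is a hypothetical illustration only: it assumes vLLM's OpenAI-compatible server image, and `<your_hf_token>` is a placeholder for your own Hugging Face access token.

```yaml
#cloud-config
# Hypothetical sketch - the actual script is provided by Hyperstack.
runcmd:
  # Serve the FP8-quantised 405B model via vLLM's OpenAI-compatible API
  # on port 8000, sharded across all 8 GPUs of the node.
  - docker run -d --gpus all -p 8000:8000
      -e HF_TOKEN=<your_hf_token>
      vllm/vllm-openai:latest
      --model meta-llama/Meta-Llama-3.1-405B-Instruct-FP8
      --tensor-parallel-size 8
```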

Review and Deploy

  • Double-check all your settings.
  • Click the "Deploy" button to launch your virtual machine.

Step 3: Initialisation and Setup

After deploying your VM, the cloud-init script will begin its work. This process typically takes about 20 minutes. During this time, the script performs several crucial tasks:

  • Dependencies Installation: Installs all necessary libraries and tools required to run Llama 3.1 405B.
  • Model Download: Fetches the Llama 3.1 405B model files from the specified repository.

While waiting, you can prepare your local environment for SSH access and familiarise yourself with the Hyperstack dashboard.

Step 4: Accessing Your VM

Once the initialisation is complete, you can access your VM:

Locate SSH Details

  • In the Hyperstack dashboard, find your VM's details.
  • Look for the public IP address, which you will need to connect to your VM with SSH.

Connect via SSH

  • Open a terminal on your local machine.
  • Use the command ssh -i [path_to_ssh_key] [os_username]@[vm_ip_address] (e.g. ssh -i /users/username/downloads/keypair_hyperstack ubuntu@203.0.113.10)
  • Replace [path_to_ssh_key], [os_username] and [vm_ip_address] with the details provided by Hyperstack.

Interacting with Llama 3.1 405B

To access and experiment with Meta's latest model, SSH into your machine after completing the setup. If you are having trouble connecting with SSH, watch our recent platform tour video (at 4:08) for a demo. Once connected, use this API call on your machine to start using the Llama 3.1 405B: 

MODEL_NAME="meta-llama/Meta-Llama-3.1-405B-Instruct-FP8"
curl -X POST http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "'$MODEL_NAME'",
        "messages": [
            {
                "role": "user",
                "content": "Hello, how are you?"
            }
        ]
    }'

IMPORTANT: We are deploying the quantised FP8 model version to enable it to fit within a single node.
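The same call can be made from Python using only the standard library. This is a minimal sketch assuming the API is listening on localhost:8000, as set up by the cloud-init script above:

```python
import json
import urllib.request

# Endpoint and model name as used in the curl example above.
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL_NAME = "meta-llama/Meta-Llama-3.1-405B-Instruct-FP8"

def build_chat_request(prompt, model=MODEL_NAME):
    """Build the JSON body for an OpenAI-compatible chat completion call."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt):
    """POST the prompt to the local API and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires the VM's API to be running):
# print(chat("Hello, how are you?"))
```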

If the API is not working after ~10 minutes, please refer to our 'Troubleshooting Llama 3.1 405B' section below.

Troubleshooting Llama 3.1 405B
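The model takes a while to load after the VM boots, so the first thing to check is whether the API is up yet. Below is a minimal readiness probe, assuming the OpenAI-compatible server exposes the /v1/models route on port 8000 as in the examples above:

```python
import urllib.error
import urllib.request

def api_ready(base_url="http://localhost:8000", timeout=5):
    """Return True once the OpenAI-compatible API responds on /v1/models."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Server not listening yet - the model may still be loading.
        return False
```

If this keeps returning False well past the expected startup time, SSH into the VM and inspect the container logs (e.g. with docker logs) for download or GPU memory errors.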

Step 5: Hibernating Your VM

When you're finished with your current workload, you can hibernate your VM to avoid incurring unnecessary costs:

  • In the Hyperstack dashboard, locate your virtual machine.
  • Look for a "Hibernate" option.
  • Click to hibernate the VM, which will stop billing for compute resources while preserving your setup.

Why Deploy Llama 3.1 405B on Hyperstack?

Hyperstack is a cloud platform designed to accelerate AI and machine learning workloads. Here's why it's an excellent choice for deploying Llama 3.1 405B:

  • Availability: Hyperstack provides access to the latest and most powerful GPUs such as the NVIDIA H100 on-demand, specifically designed to handle large language models. 
  • Ease of Deployment: With pre-configured environments and one-click deployments, setting up complex AI models becomes significantly simpler on our platform. 
  • Scalability: You can easily scale your resources up or down based on your computational needs.
  • Cost-Effectiveness: You pay only for the resources you use with our cost-effective cloud GPU pricing.
  • Integration Capabilities: Hyperstack provides easy integration with popular AI frameworks and tools.

Explore our Llama Tutorials Series Below!

Want to get started with other popular Meta Llama models? Check out our comprehensive tutorials below!

FAQs

What is Llama 3.1 405B?

Llama 3.1 405B is Meta's top open-source language model with 405 billion parameters. It excels in multilingual dialogue, surpassing many benchmarks. It supports multiple languages and processes up to 128,000 tokens, handling extensive data and maintaining coherence.

What are Llama 3.1 405B features?

The latest Llama 3.1 405B comes with new capabilities, including:

  • Multilingual Support: Llama 3.1 405B supports languages like English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, broadening its use.

  • Extended Context Length: It processes up to 128,000 tokens, handling large data and maintaining coherence.

  • Tool Usage Capabilities: Designed to use external tools, enhancing functionality.

  • Open Source Accessibility: As open-source, it's available for research, promoting transparency and innovation.

  • Synthetic Data Generation: Creates synthetic data to tackle privacy and scarcity issues.

Can Llama 3.1 405B process long texts?

Yes, Llama 3.1 405B supports an expanded context of up to 128k tokens, making it capable of handling larger datasets and documents.

How do I deploy Llama 3.1 405B on Hyperstack?

You can deploy Llama 3.1 405B by launching a virtual machine with NVIDIA H100 GPUs, configuring the environment, and using cloud-init scripts for setup.

Why should I deploy Llama 3.1 405B on Hyperstack?

Hyperstack provides access to powerful GPUs like the NVIDIA H100, easy deployment, scalability, and cost-effective GPU pricing, making it ideal for running Llama 3.1 405B.
