What is DeepCoder-14B-Preview?
DeepCoder-14B-Preview is a 14-billion-parameter large language model (LLM) developed by the Agentica team and Together AI for advanced code reasoning tasks. Fine-tuned from DeepSeek-R1-Distill-Qwen-14B using distributed reinforcement learning (RL), it excels at long-context code generation. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5, outperforming its base model and matching the performance of OpenAI's o3-mini despite having fewer parameters. Trained on a diverse dataset of 24,000 problem-test pairs, DeepCoder-14B-Preview is optimised for real-world coding challenges.
Features of DeepCoder-14B-Preview
The features of DeepCoder-14B-Preview include:
- Enhanced Code Reasoning: Fine-tuned for complex code generation and problem-solving tasks.
- Long-Context Handling: Supports context lengths up to 64K tokens, enabling comprehensive code understanding.
- Improved Training Techniques: Utilises GRPO+ and iterative context lengthening for stable and efficient training.
- High Benchmark Performance: Achieves 60.6% Pass@1 on LiveCodeBench v5 and ranks in the 95.3rd percentile on Codeforces.
- Versatile Deployment: Compatible with inference systems like vLLM, Hugging Face TGI, SGLang, and TensorRT-LLM.
- Open Source Accessibility: Released under the MIT license, promoting transparency and collaboration.
Steps to Deploy DeepCoder 14B Preview on Hyperstack
Now, let's walk through the step-by-step process of deploying DeepCoder 14B Preview on Hyperstack.
Step 1: Accessing Hyperstack
- Go to the Hyperstack website and log in to your account.
- If you're new to Hyperstack, you'll need to create an account and set up your billing information. Check our documentation to get started with Hyperstack.
- Once logged in, you'll be greeted by the Hyperstack dashboard, which provides an overview of your resources and deployments.
Step 2: Deploying a New Virtual Machine
Initiate Deployment
- Look for the "Deploy New Virtual Machine" button on the dashboard.
- Click it to start the deployment process.
Select Hardware Configuration
- In the hardware options, choose the "1xL40" flavour.
Choose the Operating System
- Select the "Ubuntu Server 22.04 LTS R550 CUDA 12.4 with Docker".
Select a keypair
- Select one of the keypairs in your account. Don't have a keypair yet? See our Getting Started tutorial for creating one.
Network Configuration
- Ensure you assign a Public IP to your virtual machine.
- This allows you to access your VM from the internet, which is crucial for remote management and API access.
Enable SSH Access
- Make sure to enable an SSH connection.
- You'll need this to connect and manage your VM securely.
Configure Additional Settings
- Look for an "Additional Settings" or "Advanced Options" section.
- Here, you'll find a field for cloud-init scripts. This is where you'll paste the initialisation script. Click here to get the cloud-init script!
To use the DeepCoder 14B Preview, you need to:
- Request access here: https://huggingface.co/agentica-org/DeepCoder-14B-Preview
- Create a HuggingFace token to access the gated model, see more info here.
- Replace line 12 of the attached cloud-init file with your HuggingFace token.
Please note: this cloud-init script only exposes the API for demo purposes. For production environments, consider containerisation (e.g. Docker), secure connections, secret management, and monitoring for your API.
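As a small step toward the secret management mentioned above, you can keep the token out of the script and read it from an environment variable at runtime instead. Below is a minimal sketch; the `HF_TOKEN` variable name is an assumption for illustration, not something the cloud-init script requires:

```python
import os

def get_hf_token() -> str:
    # Read the Hugging Face token from the environment instead of
    # hardcoding it in the cloud-init file. HF_TOKEN is a hypothetical
    # variable name; set it before launching the model server, e.g.
    #   export HF_TOKEN=hf_xxxxxxxx
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError("HF_TOKEN is not set; export it before starting the server")
    return token
```

This keeps the secret out of version control and out of the VM's cloud-init metadata.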
Review and Deploy
- Double-check all your settings.
- Click the "Deploy" button to launch your virtual machine.
Step 3: Initialisation and Setup
After deploying your VM, the cloud-init script will begin its work. This process typically takes about 5-10 minutes. During this time, the script performs several crucial tasks:
- Dependencies Installation: Installs all necessary libraries and tools required to run DeepCoder-14B-Preview.
- Model Download: Fetches the model files from the specified repository.
While waiting, you can prepare your local environment for SSH access and familiarise yourself with the Hyperstack dashboard.
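Rather than guessing when the script has finished, you can poll until the API answers. The sketch below is a generic retry helper; the commented probe and the `http://localhost:8000/v1/models` endpoint are assumptions based on a standard OpenAI-compatible server, so adjust them to your setup:

```python
import time

def wait_until_ready(probe, attempts=60, delay=10.0):
    """Call `probe` until it returns True, waiting `delay` seconds
    between attempts. Returns True once ready, False on timeout."""
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            time.sleep(delay)
    return False

# Example probe for an OpenAI-compatible server (assumed endpoint):
# import urllib.request
# def api_is_up():
#     try:
#         urllib.request.urlopen("http://localhost:8000/v1/models", timeout=5)
#         return True
#     except OSError:
#         return False
# wait_until_ready(api_is_up)
```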
Step 4: Accessing Your VM
Once the initialisation is complete, you can access your VM:
Locate SSH Details
- In the Hyperstack dashboard, find your VM's details.
- Look for the public IP address, which you will need to connect to your VM with SSH.
Connect via SSH
- Open a terminal on your local machine.
- Use the command ssh -i [path_to_ssh_key] [os_username]@[vm_ip_address] (e.g. ssh -i /users/username/downloads/keypair_hyperstack ubuntu@0.0.0.0)
- Replace [path_to_ssh_key], [os_username] and [vm_ip_address] with the details provided by Hyperstack.
Interacting with DeepCoder 14B Preview
To access and experiment with DeepCoder 14B Preview, SSH into your machine after completing the setup. If you are having trouble connecting with SSH, watch our recent platform tour video (at 4:08) for a demo. Once connected, use this API call on your machine to start using the DeepCoder 14B Preview:
MODEL_NAME="agentica-org/DeepCoder-14B-Preview"
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'$MODEL_NAME'",
    "messages": [
      {
        "role": "user",
        "content": "How to write a Python function that prints: Hyperstack is an amazing platform!"
      }
    ]
  }'
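The server replies with an OpenAI-style chat-completion JSON object. Below is a minimal sketch of pulling the generated text out of such a response; the sample payload is illustrative only, not real model output:

```python
import json

def extract_reply(response_json: str) -> str:
    """Return the assistant's message text from an OpenAI-style
    chat-completion response body."""
    data = json.loads(response_json)
    return data["choices"][0]["message"]["content"]

# Illustrative response shape only; real output will differ.
sample = json.dumps({
    "model": "agentica-org/DeepCoder-14B-Preview",
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "def greet():\n    print('Hyperstack is an amazing platform!')"
        }
    }]
})
print(extract_reply(sample))
```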
Troubleshooting DeepCoder 14B Preview
If you run into any issues, follow these steps:
- SSH into your VM.
- Check the cloud-init logs with the following command: cat /var/log/cloud-init-output.log
- Use the logs to debug any issues.
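To speed up debugging, you can filter the log for likely failure lines instead of reading it end to end. A minimal sketch follows; the keyword list is a heuristic assumption, not an exhaustive set of cloud-init error markers:

```python
def find_problem_lines(log_text, keywords=("error", "failed", "traceback")):
    """Return (line_number, line) pairs whose text contains any
    keyword, case-insensitively."""
    hits = []
    for number, line in enumerate(log_text.splitlines(), start=1):
        lowered = line.lower()
        if any(word in lowered for word in keywords):
            hits.append((number, line))
    return hits

# Typical use on the VM:
# with open("/var/log/cloud-init-output.log") as f:
#     for number, line in find_problem_lines(f.read()):
#         print(f"{number}: {line}")
```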
Step 5: Hibernating Your VM
When you're finished with your current workload, you can hibernate your VM to avoid incurring unnecessary costs:
- In the Hyperstack dashboard, locate your Virtual machine.
- Look for a "Hibernate" option.
- Click to hibernate the VM, which will stop billing for compute resources while preserving your setup.
Why Deploy DeepCoder 14B Preview on Hyperstack?
Hyperstack is a cloud platform designed to accelerate AI and machine learning workloads. Here's why it's an excellent choice for deploying DeepCoder 14B Preview:
- Availability: Hyperstack provides access to the latest and most powerful GPUs such as the NVIDIA H100 on-demand, specifically designed to handle large language models.
- Ease of Deployment: With pre-configured environments and one-click deployments, setting up complex AI models becomes significantly simpler on our platform.
- Scalability: You can easily scale your resources up or down based on your computational needs.
- Cost-Effectiveness: You pay only for the resources you use with our cost-effective cloud GPU pricing.
- Integration Capabilities: Hyperstack provides easy integration with popular AI frameworks and tools.
Explore More Tutorials
New to Hyperstack? Log in to Get Started with Our Ultimate Cloud GPU Platform Today!
FAQs
What is DeepCoder-14B-Preview?
It is a 14B-parameter LLM designed for advanced code reasoning, fine-tuned using reinforcement learning techniques.
How does DeepCoder-14B Preview perform compared to other models?
DeepCoder-14B-Preview achieves a 60.6% Pass@1 on LiveCodeBench v5, outperforming its base model and matching OpenAI's o3-mini.
What datasets were used for training DeepCoder-14B Preview?
The model was trained on approximately 24,000 problem-test pairs from Taco-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench v5.
Can DeepCoder-14B Preview handle long code contexts?
Yes, it supports context lengths up to 64K tokens, making it suitable for extensive codebases.
Where can I access DeepCoder-14B-Preview?
You can easily access the latest DeepCoder-14B Preview on Hugging Face.