Running DeepSeek R1 on My Laptop CPU (No GPU)

Running DeepSeek on a CPU with Ollama and Docker

Note: Everything written here is from LLMs (OpenAI and DeepSeek).

Introduction to DeepSeek

DeepSeek is a powerful family of AI models for natural language processing and reasoning tasks, and models like these typically rely on GPUs to accelerate computation. However, not everyone has access to high-performance GPUs; fortunately, the smaller DeepSeek models can also run on CPU-only systems. In this blog post, I'll demonstrate how to run DeepSeek on a self-hosted server, specifically an 11th Gen Intel i5 laptop CPU. We'll use Ollama to download and run the model locally and Docker for containerized deployment, keeping the setup efficient and streamlined. Whether you're exploring AI for personal projects or lightweight applications, this guide will help you make the most of your hardware resources.

Installing Docker on Linux, macOS, and Windows

Docker is a powerful tool for containerization, making it easy to run and deploy applications in isolated environments. Here's how to install Docker on the three major operating systems.


1. Installing Docker on Linux

For Ubuntu, Debian, and similar distributions:

Step 1: Update your system
sudo apt update
sudo apt upgrade -y
Step 2: Install required dependencies
sudo apt install -y ca-certificates curl gnupg
Step 3: Add Docker’s official GPG key and repository
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Step 4: Install Docker
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Step 5: Start Docker and enable it on boot
sudo systemctl start docker
sudo systemctl enable docker
Step 6: Verify installation
docker --version

2. Installing Docker on macOS

Step 1: Download Docker Desktop from the Docker website (https://www.docker.com/products/docker-desktop/)
Step 2: Install Docker
  • Open the downloaded .dmg file.
  • Drag the Docker icon into the Applications folder.
Step 3: Start Docker
  • Launch Docker from the Applications folder.
  • Follow the on-screen instructions to complete the setup.
Step 4: Verify installation

Open a terminal and run:

docker --version

3. Installing Docker on Windows

Step 1: Download Docker Desktop for Windows from the Docker website (https://www.docker.com/products/docker-desktop/)
Step 2: Install Docker
  • Run the downloaded .exe file.
  • Follow the installation wizard.
  • During the installation, make sure the WSL 2 backend option is selected (recommended on Windows 10/11, and required on Windows Home editions).
Step 3: Start Docker
  • Launch Docker Desktop from the Start Menu.
  • Sign in with your Docker Hub account or create one (optional for local use).
Step 4: Verify installation

Open PowerShell or Command Prompt and run:

docker --version

Post-Installation Tips

Add Your User to the Docker Group (Linux):
sudo usermod -aG docker $USER

Log out and back in to apply changes.
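
If you prefer not to log out, a common shortcut is to refresh the group membership in the current shell and confirm that Docker works without sudo (standard Linux commands, shown here as a quick sketch):

newgrp docker
docker ps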

Test Docker Installation:

Run a test container:

docker run hello-world
Check Docker Compose (installed as a plugin in Step 4 on Linux, and bundled with Docker Desktop on macOS and Windows):
docker compose version

Install a Frontend for the LLM

After setting up Docker and Ollama, install a frontend such as Chatbox.ai or Open WebUI for a user-friendly chat interface.
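
As a rough sketch, Open WebUI can run as a Docker container alongside Ollama; the image name, port mapping, and volume below follow the defaults in the Open WebUI documentation, so adjust them to your setup:

# Start Open WebUI on port 3000 and point it at the local Ollama instance
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Once the container is running, the interface is available at http://localhost:3000.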

Open WebUI interface (screenshot)

Installing Ollama

Ollama is a tool for running large language models (LLMs) locally. It simplifies model management and allows running advanced AI models on your hardware.

1. Installing Ollama on macOS

Ollama ships a native macOS app, downloadable from the Ollama official site, and it can also be installed from the command line. Here's the Homebrew route:

Install Ollama via Homebrew:
brew install ollama
Start the Ollama service:
ollama serve
Verify Installation:

Run the following command to confirm:

ollama --version

2. Installing Ollama on Windows or Linux

Ollama also provides native builds for Linux and Windows: on Linux, the official one-line install script handles the setup, and on Windows there is a standalone installer on the Ollama official site. Alternatively, Ollama can run inside Docker using the official ollama/ollama image, which fits neatly into the containerized setup described above.
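
A minimal sketch of both routes, assuming the official install script from ollama.com and the ollama/ollama image on Docker Hub (CPU-only, so no GPU flags are needed):

# Native Linux install via the official script
curl -fsSL https://ollama.com/install.sh | sh

# Or run Ollama as a CPU-only Docker container
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with a model inside the container
docker exec -it ollama ollama run deepseek-r1:8b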

Downloading and Running Different DeepSeek LLMs

Once Ollama is installed, you can easily install and run models like DeepSeek.

1. Install a Model

To install a model, use the ollama run command. This will pull the model if it's not already downloaded. For example, to install and run a DeepSeek model:

ollama run deepseek-r1:8b

(Replace 8b with the desired model size)
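
If you only want to download the weights without starting an interactive session, Ollama also provides a pull command:

ollama pull deepseek-r1:8b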

2. List Available Models

To see all installed models:

ollama list

3. Run a Model

To use a specific installed model:

ollama run <model_name>

Example:

ollama run deepseek-r1:8b

4. Managing Models

Delete a Model: If you need to remove a model to free up space:

ollama rm <model_name>

Example:

ollama rm deepseek-r1:8b
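
Check Running Models: In recent Ollama versions you can also see which models are currently loaded in memory, a useful sanity check on a CPU-only machine with limited RAM (availability depends on your Ollama version):

ollama ps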

5. Testing and Using DeepSeek LLMs

You can interact with the DeepSeek models through the terminal after running them. For example:

ollama run deepseek-r1:8b

Then, type your input query to test the model's capabilities.
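
Besides the interactive terminal, Ollama exposes an HTTP API on port 11434, which is how frontends like Open WebUI talk to it. A minimal curl test against the generate endpoint, assuming the 8B model pulled above:

curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:8b", "prompt": "Why is the sky blue?", "stream": false}'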

DeepSeek Models Available

DeepSeek provides multiple models optimized for various tasks. Common versions include:

# 1.5B version (smallest):
ollama run deepseek-r1:1.5b

# 8B version:
ollama run deepseek-r1:8b

# 14B version:
ollama run deepseek-r1:14b

# 32B version:
ollama run deepseek-r1:32b

# 70B version (biggest/smartest):
ollama run deepseek-r1:70b

Conclusion

In conclusion, running DeepSeek on an 11th Gen Intel i5 laptop CPU proves to be a practical option for lightweight AI workloads. With the 8B model, the system generates roughly 1.5–2 words per second, which is workable for small-scale applications. CPU utilization sits around 80–90% during generation, but performance remains stable and reliable, demonstrating that even modest hardware can run advanced language models when paired with tools like Ollama and Docker.

Further Reading

For a comparison of the available models, see: https://huggingface.co/deepseek-ai/DeepSeek-V3
