How to Install DeepSeek-R1 on Windows: A Step-by-Step Guide
Installing DeepSeek-R1 on Windows allows you to run AI models locally, ensuring privacy, offline access, and customization. Whether you’re a developer looking to integrate DeepSeek-R1 into applications or a general user seeking AI assistance without cloud dependencies, this guide will walk you through the installation and setup process.

Part 1: Install Ollama (Prerequisite for Running DeepSeek-R1)
To run DeepSeek-R1 on Windows, you first need to install Ollama, a tool that simplifies running large language models (LLMs) locally.
Step 1: Download and Install Ollama
1. Download Ollama from the official Ollama website and get the Windows executable file.
2. Run the installer by double-clicking the .exe file and following the setup prompts.
3. Verify installation by opening Command Prompt and running:
- ollama --version
If the version number appears, Ollama is installed successfully.
Step 2: Enable Windows Subsystem for Linux (WSL) (Optional, but Recommended)
For better performance, consider enabling Windows Subsystem for Linux (WSL 2):
1. Go to Control Panel > Programs > Turn Windows features on or off.
2. Enable Windows Subsystem for Linux and restart your computer.
3. Update WSL by running:
- wsl --update
Part 2: Download the DeepSeek-R1 Model
DeepSeek-R1 offers multiple model sizes to match different hardware capabilities. Choose based on your system’s RAM and GPU power.
DeepSeek-R1 Model Options
| Model Size | RAM Requirement | Disk Space |
|---|---|---|
| 1.5B | 8GB+ RAM | 1.5GB |
| 7B | 16GB+ RAM | 4.7GB |
| 14B | 32GB+ RAM | 9GB |
| 70B | 64GB+ RAM | 43GB |
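If you're scripting the setup, the table above can be reduced to a small helper that picks the largest model your machine's RAM allows. This is a rough sketch (the thresholds are copied from the table; the function name is illustrative):

```python
# Sketch: pick the largest DeepSeek-R1 variant that fits in available RAM.
# Thresholds are taken from the table above (largest model first).
REQUIREMENTS = [
    ("70b", 64),
    ("14b", 32),
    ("7b", 16),
    ("1.5b", 8),
]

def pick_model(ram_gb):
    """Return the largest model tag that fits in ram_gb, or None if RAM is too low."""
    for tag, min_ram in REQUIREMENTS:
        if ram_gb >= min_ram:
            return tag
    return None

# Example: a machine with 16GB of RAM can run the 7B model.
print(pick_model(16))
```

You would then pass the returned tag to `ollama pull deepseek-r1:<tag>` in the next step.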
Step 1: Pull the DeepSeek-R1 Model
1. Open Command Prompt and run:
- ollama pull deepseek-r1:[MODEL_SIZE]
Replace [MODEL_SIZE] with your choice (e.g., 1.5b, 7b, 14b).
Example command for the 7B model:
- ollama pull deepseek-r1:7b
2. Wait for the download to complete. The process may take several minutes, depending on your internet speed.
Part 3: Run DeepSeek-R1 via Command Line
Once downloaded, you can start DeepSeek-R1 directly from the command line.
Step 1: Launch DeepSeek-R1
Run the following command to start the AI model:
- ollama run deepseek-r1:[MODEL_SIZE]
Example for running the 7B model:
- ollama run deepseek-r1:7b
Step 2: Test DeepSeek-R1 with a Prompt
You can interact with the AI by entering a sample prompt:
- echo "Explain quantum computing in simple terms" | ollama run deepseek-r1:7b
Note: By default, command-line usage doesn’t save chat history. If you need persistent chat logs, consider setting up a graphical interface.
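If you only need simple persistence without setting up a GUI, one lightweight option is to wrap the ollama Python package (installed in Part 5) in a helper that appends each exchange to a JSON file. This is a minimal sketch; the log filename and helper name are illustrative:

```python
import json
import os

LOG_FILE = "chat_log.json"  # arbitrary filename for this sketch

def append_exchange(log_path, prompt, reply):
    """Append a user prompt and assistant reply to a JSON log file; return the history."""
    history = []
    if os.path.exists(log_path):
        with open(log_path, "r", encoding="utf-8") as f:
            history = json.load(f)
    history.append({"role": "user", "content": prompt})
    history.append({"role": "assistant", "content": reply})
    with open(log_path, "w", encoding="utf-8") as f:
        json.dump(history, f, indent=2)
    return history

# Example wiring (requires `pip install ollama` and a running Ollama instance):
#   import ollama
#   prompt = "Explain quantum computing in simple terms"
#   response = ollama.chat(model="deepseek-r1:7b",
#                          messages=[{"role": "user", "content": prompt}])
#   append_exchange(LOG_FILE, prompt, response["message"]["content"])
```

Because the log is plain JSON, the saved history can also be fed back into later `ollama.chat` calls as the `messages` list to continue a conversation.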
Part 4: Set Up a Graphical User Interface (GUI) for DeepSeek-R1
For a more user-friendly AI experience, use a web-based GUI like Open WebUI or Chatbox AI.
Option 1: Open WebUI via Docker
1. Install Docker Desktop and sign in.
2. Run the following command in Command Prompt to set up Open WebUI:
- docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
3. Access the UI by opening your browser and visiting:
- http://localhost:3000
Option 2: Chatbox AI
1. Download Chatbox AI from its official website.
2. Open Chatbox AI and navigate to Settings.
3. Select DeepSeek API and enter your API key (optional for local use).
Part 5: Integrate DeepSeek-R1 into Python Applications
For developers, DeepSeek-R1 can be integrated into Python applications for advanced AI-powered workflows.
Step 1: Install Required Libraries
- pip install ollama
Step 2: Create a Python Script
import ollama

response = ollama.chat(
    model='deepseek-r1:7b',
    messages=[{'role': 'user', 'content': 'Write a Python function for Fibonacci numbers.'}]
)
print(response['message']['content'])
This script sends a prompt to DeepSeek-R1 and prints the response.
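Under the hood, the ollama package talks to Ollama's local REST API, which listens on port 11434 by default. If you'd rather avoid the extra dependency, here is a standard-library-only sketch (the endpoint URL assumes Ollama's default port, and the helper names are illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint

def build_payload(model, prompt):
    """Build the JSON body for a non-streaming /api/chat request."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")

def chat(model, prompt):
    """Send one prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

if __name__ == "__main__":
    try:
        print(chat("deepseek-r1:7b", "Write a Python function for Fibonacci numbers."))
    except OSError:
        print("Ollama is not running -- start it with `ollama serve`.")
```

Setting `"stream": False` returns one complete JSON object instead of a stream of chunks, which keeps the parsing simple for a first integration.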
Part 6: Troubleshooting Common Issues
Model Not Found
Ensure Ollama is running by executing:
- ollama serve
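You can also script this check: Ollama's local /api/tags endpoint (port 11434 by default) lists the models currently installed. A sketch, assuming the default port:

```python
import json
import urllib.request

TAGS_URL = "http://localhost:11434/api/tags"  # Ollama's default model-list endpoint

def installed_models(tags_json):
    """Extract model names from a /api/tags response body."""
    return [m["name"] for m in tags_json.get("models", [])]

if __name__ == "__main__":
    try:
        with urllib.request.urlopen(TAGS_URL, timeout=3) as resp:
            names = installed_models(json.load(resp))
        print("Ollama is running. Installed models:", names)
    except OSError:
        print("Ollama is not reachable -- start it with `ollama serve`.")
```

If your model tag (e.g. `deepseek-r1:7b`) is missing from the list, re-run the `ollama pull` command from Part 2.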
Out-of-Memory Errors
If RAM usage is too high, switch to a smaller model (e.g., use 1.5B instead of 7B).
WSL Errors
Run wsl --update to ensure Windows Subsystem for Linux is up to date.
API Key Issues (For GUI Users)
For Chatbox AI, you can use DeepSeek-R1 locally without an API key.
Part 7: Why Install DeepSeek-R1 Locally?
Installing DeepSeek-R1 on Windows provides several advantages:
Privacy
- Run AI models locally, keeping sensitive data on your device.
Cost Savings
- Avoid expensive API fees (DeepSeek’s $0.55 per million tokens is significantly cheaper than OpenAI’s $15 per million tokens).
Offline Access
- No internet connection required once installed.
Conclusion
By following this guide, you can successfully install DeepSeek-R1 on Windows and use it for a variety of AI-powered tasks, from coding assistance to research applications.
For advanced configurations, such as GPU acceleration or fine-tuning, refer to Ollama’s documentation or explore Docker-based setups.
Now that you have DeepSeek-R1 running on Windows, start exploring its capabilities and AI potential today!
Daniel Walker
Editor-in-Chief