You want to run Deepseek-r1 8B inside a Docker container for portability and isolation but are unsure how to set up Ollama within the container. This guide covers how to:
- Create a Dockerfile that installs Ollama inside a container.
- Pull and run Deepseek-r1 8B inside the container.
- Let the container use host resources (such as a GPU) for efficient execution.
Define a Dockerfile to install Ollama and set up Deepseek-r1 8B inside a container.
# Use an Ubuntu base image
FROM ubuntu:22.04
# Set environment variables to prevent interactive prompts
ENV DEBIAN_FRONTEND=noninteractive
# Install dependencies
RUN apt-get update && apt-get install -y curl ca-certificates
# Install Ollama
RUN curl -fsSL https://ollama.com/install.sh | sh
# Download and preload the Deepseek-r1 8B model
# (ollama pull needs the server running, so start it temporarily in this layer)
RUN ollama serve & sleep 5 && ollama pull deepseek-r1:8b
# Expose the port for API access (optional)
EXPOSE 11434
# Bind to all interfaces so the published port is reachable from the host
ENV OLLAMA_HOST=0.0.0.0
# Run Ollama as the default process
CMD ["ollama", "serve"]
Navigate to the directory where the Dockerfile is saved and build the image:
docker build -t deepseek-r1-container .
This will create a Docker image named deepseek-r1-container.
Run the container and start Ollama:
docker run --rm -it -p 11434:11434 deepseek-r1-container
This starts the container and exposes Ollama’s API on port 11434. If the host has an NVIDIA GPU and the NVIDIA Container Toolkit installed, add --gpus all to the run command so the model can use it.
Now that the container is running, you can execute prompts from outside the container (this requires the Ollama CLI installed on the host; it connects to localhost:11434 by default, which is the port mapped above):
ollama run deepseek-r1:8b "Tell me about black holes."
Or send a request via curl:
curl http://localhost:11434/api/generate -d '{
"model": "deepseek-r1:8b",
"prompt": "Explain quantum mechanics in simple terms"
}'
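The same API can also be called from a script. Below is a minimal sketch of a stdlib-only Python client; the build_payload and generate helpers and the localhost URL are assumptions based on the port mapping above, not part of Ollama itself. Setting "stream": false asks Ollama to return a single JSON object instead of a stream of JSON chunks.

```python
# Minimal Python client for the Ollama HTTP API (stdlib only).
# Assumes the container from above is running with port 11434 published.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(prompt: str, model: str = "deepseek-r1:8b") -> bytes:
    """Encode a non-streaming generate request as JSON bytes."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")


def generate(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """POST a prompt to the Ollama server and return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (needs the container running):
#   print(generate("Explain quantum mechanics in simple terms"))
```
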
Running Deepseek-r1 8B in a Docker container provides portability, isolation, and easy deployment across multiple environments. You can now interact with the AI model using local commands or API requests.