Get Your OpenThinker 7B AI Model Running in a Day
OpenThinker 7B is a powerful open-source language model that can be deployed efficiently in a containerized environment using Ollama. Running it inside a Docker container provides portability, scalability, and reproducibility, making it ideal for development, testing, and production use cases.
In this guide, we will cover:
Installing Docker and Ollama
Creating a Docker container for OpenThinker 7B
Running and interacting with the model inside the container
Using the model via CLI and Python
This step-by-step approach ensures that OpenThinker 7B runs smoothly while avoiding dependency conflicts.
Docker allows you to create and manage containerized environments, ensuring the model runs consistently across different systems.
Install Docker on Ubuntu
Run the following commands to install Docker:
sudo apt update && sudo apt upgrade -y
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Verify Installation
To confirm Docker is installed, check the version:
docker --version
You should see an output similar to:
Docker version 24.0.5, build abcdef
If you're using Windows or macOS, download and install Docker Desktop from the official Docker website.
Ollama is a lightweight framework optimized for running large language models efficiently.
Pull the Ollama Docker Image
Since we are using Docker, we will pull the official Ollama image from Docker Hub:
docker pull ollama/ollama
Once downloaded, verify the image is available:
docker images | grep ollama
You should see output like:
REPOSITORY TAG IMAGE ID CREATED SIZE
ollama/ollama latest 123456abcdef 2 days ago 3.2GB
This confirms that Ollama is ready to use.
Now, let's create a Dockerfile that will install OpenThinker 7B inside a Docker container.
Create a New Directory for Your Project
Navigate to a working directory and create a new folder for the project:
mkdir OpenThinker-7B-Docker && cd OpenThinker-7B-Docker
Create the Dockerfile
Inside the new directory, create a Dockerfile:
touch Dockerfile
nano Dockerfile
Now, add the following content:
# Use the official Ollama image as base
FROM ollama/ollama

# Download OpenThinker 7B inside the container.
# `ollama pull` needs a running server, so start one in the
# background for the duration of this build step.
RUN ollama serve & sleep 5 && ollama pull openthinker:7b

# Expose the default Ollama port
EXPOSE 11434

# Set Ollama as the entry point
ENTRYPOINT ["ollama", "serve"]
Explanation of the Dockerfile
FROM ollama/ollama – builds on the official Ollama image, which already contains the Ollama runtime.
RUN … ollama pull – downloads the OpenThinker 7B weights at build time, so the container starts with the model ready.
EXPOSE 11434 – documents the default port the Ollama server listens on.
ENTRYPOINT ["ollama", "serve"] – starts the Ollama server when the container launches.
Save the file (CTRL + X, then Y, then Enter).
Build the Docker Image
Run the following command to build the image:
docker build -t openthinker-7b .
Once completed, check if the image is built successfully:
docker images | grep openthinker-7b
Expected output:
REPOSITORY TAG IMAGE ID CREATED SIZE
openthinker-7b latest 789xyzabcdef 5 minutes ago 3.5GB
Run the Docker Container
Start the container in detached mode:
docker run -d --name openthinker_container -p 11434:11434 openthinker-7b
Here’s what the command does:
-d – runs the container in detached (background) mode.
--name openthinker_container – gives the container a recognizable name.
-p 11434:11434 – maps the container’s Ollama port to the same port on the host.
Verify if the container is running:
docker ps
Expected output:
CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES
abcd1234 openthinker-7b "ollama serve" Up 2 mins 0.0.0.0:11434->11434/tcp openthinker_container
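Before sending requests, you can confirm the server inside the container is answering. Ollama replies to a plain GET on its root URL, so a small Python probe (a sketch, assuming the default port mapping above) is enough:

```python
import urllib.error
import urllib.request


def ollama_ready(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if the Ollama server behind `url` answers an HTTP GET."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timed out: the container is not ready yet
        return False


# Requires the container to be running:
# print(ollama_ready())
```

This can be polled in a startup script until it returns True, since the first launch may take a moment while the server initializes.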
Using Ollama CLI
Now that the container is running, you can interact with OpenThinker 7B through the Ollama CLI inside the container:
docker exec -it openthinker_container ollama run openthinker:7b "What is the significance of deep learning in AI?"
Expected output:
Deep learning is a subset of machine learning that utilizes neural networks with multiple layers...
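Because the container publishes port 11434, you can also call Ollama's HTTP API directly instead of going through the CLI. Below is a minimal sketch using only the Python standard library; it assumes the container from the previous step is running on localhost and that the model was pulled as openthinker:7b:

```python
import json
import urllib.request

# Default address of the Ollama server published by the container
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    # /api/generate takes the model name and the prompt; stream=False asks
    # the server to return the whole answer as a single JSON object.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Requires the container to be running:
# print(generate("openthinker:7b", "What is the significance of deep learning in AI?"))
```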
Using Python
You can also interact with the model programmatically. Install the Ollama Python client on the host (pip install ollama); it talks to the server running inside the container over port 11434:
import ollama

response = ollama.chat(
    model="openthinker:7b",
    messages=[{"role": "user", "content": "Explain reinforcement learning."}],
)
print(response["message"]["content"])
Expected output:
Reinforcement learning is a type of machine learning where an agent learns by interacting with an environment...
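For a multi-turn conversation, the chat API is stateless: the caller must resend the full message history on every call. A small sketch of how that history can be accumulated (the helper name is ours, not part of the ollama package):

```python
# Hypothetical helper: append one turn to the conversation without
# mutating the existing history list.
def add_turn(history: list, role: str, content: str) -> list:
    return history + [{"role": role, "content": content}]


history = []
history = add_turn(history, "user", "Explain reinforcement learning.")
# With the container running, each call sends the whole history:
# response = ollama.chat(model="openthinker:7b", messages=history)
# history = add_turn(history, "assistant", response["message"]["content"])
```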
To stop the running container:
docker stop openthinker_container
To remove the container:
docker rm openthinker_container
To remove the Docker image:
docker rmi openthinker-7b
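If you expect to start and stop the service often, the same run configuration can be captured declaratively. Here is a sketch of a docker-compose.yml for the image built above; the volume name is our own choice, and /root/.ollama is where the official Ollama image stores pulled models:

```yaml
services:
  openthinker:
    image: openthinker-7b
    container_name: openthinker_container
    ports:
      - "11434:11434"
    restart: unless-stopped
    volumes:
      - ollama_data:/root/.ollama  # persist models across container recreation
volumes:
  ollama_data:
```

With this file in place, docker compose up -d replaces the docker run command shown earlier, and docker compose down performs the stop-and-remove steps in one go.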
Running OpenThinker 7B inside a Docker container using Ollama ensures a streamlined, isolated, and portable deployment environment. This approach eliminates dependency issues and makes it easier to scale the model across different systems.
Key Takeaways:
Docker gives the model a consistent, reproducible environment across systems.
Baking the model into the image with a Dockerfile makes deployments repeatable.
Once the container is running, the model is reachable via the Ollama CLI, the HTTP API on port 11434, and Python.
Containers and images can be stopped and removed cleanly when no longer needed.