Get Your OpenThinker 7B AI Model Running in a Day
Hugging Face provides a great platform for hosting large language models (LLMs), including OpenThinker 7B. By downloading and running the model using Docker, you can ensure that the environment is consistent, portable and easy to scale. In this guide, we will focus on pulling the OpenThinker 7B model directly from Hugging Face's model hub using Docker.
Before we can download OpenThinker 7B, we need to have Docker installed on our system. If you haven't installed Docker yet, follow these steps for your platform:
For Ubuntu:
sudo apt update && sudo apt upgrade -y
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Verify Installation:
To confirm that Docker is installed, run:
docker --version
This should output something like:
Docker version 24.0.5, build [Build ID]
To run OpenThinker 7B with Docker, we will use a Hugging Face-supported Docker image. You can pull the official Hugging Face transformers image from Docker Hub.
Run the following command to pull the image:
docker pull huggingface/transformers
Once the image is pulled, verify by listing the available images:
docker images | grep transformers
Expected output:
REPOSITORY TAG IMAGE ID CREATED SIZE
huggingface/transformers latest [Image ID] 2 days ago 5.2GB
Now, let's create a custom Dockerfile that will set up OpenThinker 7B in the Hugging Face Docker container.
Create a New Directory for Your Project:
mkdir OpenThinker-7B-Docker-HuggingFace && cd OpenThinker-7B-Docker-HuggingFace
Create the Dockerfile:
Create a file named Dockerfile in this directory:
touch Dockerfile
nano Dockerfile
Add the following content to the Dockerfile:
# Use Hugging Face's official transformer image as base
FROM huggingface/transformers
# Install necessary dependencies
RUN pip install --upgrade pip && pip install torch transformers
# Download OpenThinker 7B model from Hugging Face Hub at build time
# (backslashes continue the RUN instruction across lines)
RUN python -c "from transformers import AutoModelForCausalLM, AutoTokenizer; \
    model = AutoModelForCausalLM.from_pretrained('OpenThinker/OpenThinker-7B'); \
    tokenizer = AutoTokenizer.from_pretrained('OpenThinker/OpenThinker-7B')"
# Copy the inference server into the image (create app.py next to this Dockerfile)
COPY app.py .
# Expose the port the server listens on
EXPOSE 5000
# Start the model server
CMD ["python", "app.py"]
This Dockerfile does the following:
- Uses the official Hugging Face transformers image as the base.
- Upgrades pip and installs PyTorch and the transformers library.
- Downloads the OpenThinker 7B model and tokenizer at build time, so the container starts without re-downloading them.
- Exposes port 5000 and launches the model server (app.py).
Press CTRL + X, then Y and hit Enter to save the file.
Run the following command to build the image:
docker build -t openthinker-7b-huggingface .
After the build process completes, verify the image:
docker images | grep openthinker-7b-huggingface
Expected output:
REPOSITORY TAG IMAGE ID CREATED SIZE
openthinker-7b-huggingface latest [IMAGE ID] 10 minutes ago 6GB
Now, you can run the container with the following command:
docker run -d --name openthinker_huggingface -p 5000:5000 openthinker-7b-huggingface
This command:
- Runs the container in detached mode (-d) so it keeps running in the background.
- Names the container openthinker_huggingface (--name) for easy reference.
- Maps port 5000 on the host to port 5000 inside the container (-p 5000:5000).
Verify the Container is Running:
docker ps
Expected output:
CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES
[Container ID] openthinker-7b-huggingface "python app.py" Up 2 minutes 0.0.0.0:5000->5000/tcp openthinker_huggingface
You can now interact with OpenThinker 7B via HTTP requests to port 5000. To check if the model is running:
curl http://localhost:5000
Expected output:
{"message": "Model is up and running"}
Alternatively, you can interact programmatically using Python:
import requests
response = requests.post("http://localhost:5000", json={"text": "What is the significance of deep learning in AI?"})
print(response.json())
Expected output:
{"response": "Deep learning is a subset of machine learning that utilizes neural networks..."}
To stop the container:
docker stop openthinker_huggingface
To remove the container:
docker rm openthinker_huggingface
To remove the Docker image:
docker rmi openthinker-7b-huggingface
Downloading and running OpenThinker 7B via Docker and Hugging Face provides a seamless environment for large language model deployment. It ensures that the model can be easily reproduced across different systems, and by using Docker, you avoid dependency issues and simplify the setup process.
Ready to transform your business with our technology solutions? Contact us today to leverage our AI/ML expertise.