AI/ML

How to Deploy the Gemma 2 Model with Ollama & Docker: Step-by-Step Guide


Introduction

Gemma 2 is an advanced AI model designed for high-quality text generation. By deploying it in a Docker container with Ollama, you can easily manage and run this model in a flexible and scalable environment.

Setting Up the Environment

Install Docker

To begin, ensure Docker is installed and running on your machine. If it's not yet installed, execute the following commands:

sudo apt update && sudo apt upgrade -y
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker

This will set up Docker and ensure it starts on boot.
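Before moving on, it's worth confirming the installation succeeded. A quick check of the CLI version and the daemon's status will catch most setup problems early:

```shell
# Verify the Docker CLI is installed.
docker --version

# Verify the Docker daemon is running (should print "active").
sudo systemctl is-active docker

# Optional smoke test: run and immediately remove a throwaway container.
sudo docker run --rm hello-world
```

These commands require the Docker daemon from the step above to be running.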

Deploying the Ollama Runtime

Run the Ollama Container

The next step is to launch the Ollama runtime in Docker, which is required to run Gemma 2:

docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama

This command starts the Ollama container in the background, maps its API port 11434 to the host, and stores downloaded models in a named volume (ollama) so they persist across container restarts.
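Before pulling any models, you can confirm the runtime is reachable. Ollama responds on its root endpoint and also exposes a version endpoint:

```shell
# The root endpoint returns "Ollama is running" when the server is up.
curl http://localhost:11434

# The version endpoint returns JSON, e.g. {"version":"..."}.
curl http://localhost:11434/api/version
```

If either request fails, check the container's status with docker ps and its logs with docker logs ollama.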

Download the Gemma 2 Model

Once the Ollama container is running, you can download the Gemma 2 model with the following command:

docker exec -it ollama /bin/bash
ollama pull gemma2

The first command opens a shell inside the running container; the second downloads the Gemma 2 model (published in the Ollama library as gemma2) along with everything required to run it.
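You can verify the download completed by listing the models available inside the container:

```shell
# List locally available models; a gemma2 entry should appear
# with its size and modification time.
docker exec -it ollama ollama list
```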

Running Gemma 2

Start Gemma 2 Model

With the model successfully downloaded, you can start an interactive session from the shell inside the container:

ollama run gemma2

Once it’s up and running, try a simple test by inputting a prompt:

>>> What are the key developments in machine learning?
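Besides the interactive prompt, Ollama exposes an HTTP API on the port mapped earlier, which is handy for scripting. A minimal, non-streaming request looks like this:

```shell
# Send a single prompt to the model via Ollama's /api/generate endpoint.
# "stream": false returns one complete JSON object instead of a stream
# of partial responses.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2",
  "prompt": "What are the key developments in machine learning?",
  "stream": false
}'
```

The generated text is returned in the "response" field of the resulting JSON.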

Web Interface for Gemma 2

For an enhanced user experience, you can set up a web interface (Open WebUI) to interact with Gemma 2. Note that inside the web UI container, localhost refers to the container itself, so the Ollama endpoint is reached through host.docker.internal instead:

docker run -d --name gemma-web-ui -p 4000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v gemmawebui:/app/backend/data --restart always ghcr.io/open-webui/open-webui:main

You can access the UI in your browser by navigating to http://localhost:4000.
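If the page does not load, the container's logs usually reveal the cause (for example, a failure to reach the Ollama endpoint):

```shell
# Tail the web UI container's recent logs to diagnose startup issues.
docker logs --tail 50 gemma-web-ui
```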

 

Conclusion

Deploying Gemma 2 in Docker using Ollama provides a seamless and efficient way to run this powerful AI model. Whether you’re using the command line or a web interface, this method ensures smooth operation with minimal setup.

 

Ready to transform your business with our technology solutions? Contact us today to leverage our AI/ML expertise.

