
Complete Guide to Installing Dolphin-Mixtral on Azure VM Using Docker and Ollama

Dolphin-Mixtral Model for your Business?

  • Cost Efficiency (Open Source)

  • Lower Long Term costs

  • Customised data control

  • Pre-trained model

Get Your Dolphin-Mixtral AI Model Running in a Day


Free Installation Guide - Step by Step Instructions Inside!

Introduction

Dolphin-Mixtral is a powerful AI model designed for advanced natural language processing. Hosting it in a Docker container on an Azure Virtual Machine (VM) allows for seamless deployment, scalability and an isolated execution environment.

Provisioning the Azure VM

Ensure you have an Ubuntu-based Azure VM ready. If you haven’t created one yet, set it up via the Azure Portal or Azure CLI.
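
For reference, here is a minimal Azure CLI sketch; the resource group name, VM name, region and size below are placeholder values, and the suggested Standard_D8s_v3 size (32 GB RAM) is only an assumption, chosen because Dolphin-Mixtral needs a machine with plenty of memory:

# Placeholder names and size - adjust to your subscription and workload
az group create --name dolphin-rg --location eastus
az vm create --resource-group dolphin-rg --name dolphin-vm --image Ubuntu2204 --size Standard_D8s_v3 --admin-username azureuser --generate-ssh-keys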

Once the VM is running, connect to it via SSH:

ssh -i your-private-key.pem azure-user@your-vm-ip

 

Update your system packages before proceeding:

sudo apt update && sudo apt upgrade -y

 

If Docker is not installed, install it using:

sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
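
Optionally, confirm Docker is active and, if you want to run Docker commands without sudo, add your user to the docker group (log out and back in afterwards for the group change to apply):

docker --version
sudo systemctl status docker --no-pager
sudo usermod -aG docker $USER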

 

Deploying Ollama

Now, pull and run the Ollama container, which will manage Dolphin-Mixtral:

docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama
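
If your VM has an NVIDIA GPU and the NVIDIA Container Toolkit installed, you can pass the GPU through to the container so inference does not fall back to CPU; a variant of the same command, assuming that toolkit is already set up:

docker run -d --gpus=all --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama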

Verify the container is running:

docker ps
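
You can also confirm the Ollama API is reachable on port 11434; its /api/version endpoint simply returns the server version:

curl http://localhost:11434/api/version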

 

Fetching Dolphin-Mixtral Model

Once the Ollama container is running, open a shell inside it to pull Dolphin-Mixtral:

docker exec -it ollama /bin/bash

 

Now run this command from inside the container:

ollama pull dolphin-mixtral

 

This downloads the model weights, which run to tens of gigabytes, so the first pull can take a while.
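
Alternatively, if you prefer not to keep an interactive shell open, the same pull can be run directly from the host, and ollama list confirms the model is available (this assumes the container is named ollama, as above):

docker exec ollama ollama pull dolphin-mixtral
docker exec ollama ollama list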

Running Dolphin-Mixtral

To start using the model, execute:

ollama run dolphin-mixtral 

Test the model with a prompt:

>>> How does blockchain technology work?

 

If the response is generated, the model is functioning correctly.
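
You can also query the model over Ollama's REST API from the host, which is handy for scripting; a minimal sketch using the /api/generate endpoint with streaming disabled:

curl http://localhost:11434/api/generate -d '{"model": "dolphin-mixtral", "prompt": "How does blockchain technology work?", "stream": false}'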

Web-Based Interaction with Dolphin-Mixtral

To access Dolphin-Mixtral through a browser-based UI, deploy Open WebUI:

docker run -d --name ollama-ui -p 5050:8080 -e OLLAMA_BASE_URL=http://<YOUR-VM-IP>:11434 -v open-webui:/app/backend/data --restart always ghcr.io/open-webui/open-webui:main

 

Now, navigate to http://<YOUR-VM-IP>:5050 to interact with the model via a user-friendly interface.
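
Keep in mind that the VM's network security group must allow inbound traffic on port 5050, and on 11434 as well if Open WebUI reaches Ollama via the public IP as in the command above. A sketch using the Azure CLI, reusing the placeholder resource group and VM names from earlier:

az vm open-port --resource-group dolphin-rg --name dolphin-vm --port 5050 --priority 1010
az vm open-port --resource-group dolphin-rg --name dolphin-vm --port 11434 --priority 1020

Exposing 11434 to the internet is not ideal for production; restricting the source IP range or keeping Ollama reachable only from localhost is the safer choice.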

Final Thoughts

Deploying Dolphin-Mixtral on an Azure VM using Docker and Ollama ensures efficient performance in an isolated environment. Whether you prefer CLI-based interaction or a web UI, this setup offers a seamless AI deployment experience.

 

Ready to transform your business with our technology solutions? Contact Us today to leverage our AI/ML expertise.
