Running local language models is increasingly accessible for Windows users. Thanks to tools like Ollama and Open WebUI, you can deploy advanced models on your own system. Ollama makes large language models (LLMs) easy to run from a command-line interface, while Open WebUI provides a user-friendly web interface that connects to those models via Docker, making them a powerful duo for local AI experimentation.
This post provides a detailed walkthrough to install and configure Ollama on Windows, set up Open WebUI using Docker Desktop, and link them to work together. Let’s dive in!
Prerequisites
Before starting, make sure your system meets the following requirements:
- Windows 10 or 11 (Docker Desktop will be installed in Step 2).
- Sufficient system resources (RAM, disk space) for handling LLMs.
- Python 3.11 (only needed for certain Open WebUI setups, such as installing it with pip instead of Docker).

Step 1: Install Ollama on Windows
The first step is to install Ollama, which simplifies the setup of LLMs on your local machine. While Ollama was initially designed for macOS, recent versions now support Windows installation, making it a convenient tool for running models on the command line.
- Download and Install Ollama
Go to the Ollama download page and download the Windows installer. Run the installer and follow the prompts to complete the installation.
- Verify the Installation
After installation, check that Ollama is correctly installed. In Command Prompt or PowerShell, run:
ollama --version
You should see the installed version of Ollama displayed, indicating a successful installation.
- Test Ollama with a Language Model
Ollama’s CLI lets you pull and run models directly. Start by testing with a basic model like Llama 2. In PowerShell, run:
ollama run llama2
This command downloads the model if it isn’t already present and starts an interactive session. Send a few prompts to confirm the model runs smoothly on your system.
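Ollama also starts a local HTTP API (by default at http://localhost:11434), which is what Open WebUI will talk to in later steps. As an optional sanity check, you can query that API directly from PowerShell; this is a minimal sketch assuming the default port and the llama2 model pulled above:
# List the models Ollama has available locally (default API port 11434)
Invoke-RestMethod http://localhost:11434/api/tags | ConvertTo-Json -Depth 5
# Send a single non-streaming prompt to the llama2 model pulled above
$body = @{ model = "llama2"; prompt = "Say hello in one sentence."; stream = $false } | ConvertTo-Json
(Invoke-RestMethod -Uri http://localhost:11434/api/generate -Method Post -ContentType "application/json" -Body $body).response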

Step 2: Install Docker Desktop on Windows
Docker Desktop is necessary to run Open WebUI on Windows. Docker’s containerization makes it easy to deploy applications like Open WebUI that require isolated environments.
- Download and Install Docker Desktop
Go to Docker’s official site and download the Docker Desktop installer for Windows. During installation, enable the WSL 2 backend for better container performance on Windows. Virtualization must be enabled in the BIOS; alternatively, Docker Desktop can use Hyper-V if it is already installed.
- Launch Docker Desktop
After installation, open Docker Desktop and verify that it’s running properly. In PowerShell, run:
docker --version
This should display Docker’s version, confirming that Docker Desktop is installed and active.
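Beyond checking the version, you can optionally confirm that Docker is able to run containers end to end with Docker’s standard test image:
# Pull and run Docker's test image; it prints a confirmation message and exits
docker run --rm hello-world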

Step 3: Set Up Open WebUI in Docker Desktop
With Docker installed, you can now set up Open WebUI, a web-based interface that simplifies interaction with LLMs.
- Pull the Open WebUI Docker Image
In PowerShell, use the following command to pull the latest Open WebUI image from the GitHub Container Registry:
docker pull ghcr.io/open-webui/open-webui:main
- Run Open WebUI as a Docker Container
Start Open WebUI by running a Docker container. The command below links Open WebUI with Ollama by setting the OLLAMA_BASE_URL environment variable and maps the container’s internal port 8080 to port 7860 on the host. In PowerShell, run:
docker run -d -p 7860:8080 -e OLLAMA_BASE_URL=http://host.docker.internal:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
This setup ensures that Open WebUI can communicate with Ollama’s API from inside the container. Docker Desktop automatically routes host.docker.internal to the Windows host, where Ollama’s API is running.
- Access Open WebUI
Open a web browser and go to http://localhost:7860 to verify that Open WebUI is running. You should see the Open WebUI interface, where you can load models and send prompts.
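If the page doesn’t load immediately, the container may still be starting up. A couple of standard Docker commands help confirm its status and surface any startup errors:
# Confirm the container is running and see which host port it is mapped to
docker ps --filter "name=open-webui"
# Follow the container's startup logs (press Ctrl+C to stop following)
docker logs -f open-webui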


Step 4: Configure Ollama and Open WebUI to Work Together
Ollama provides an API endpoint compatible with OpenAI’s API format, making integration with Open WebUI straightforward. Here’s how to set up API communication between Ollama and Open WebUI.
- Set Environment Variables
The OLLAMA_BASE_URL environment variable, specified in the docker run command, directs Open WebUI to Ollama’s API. Confirm this setting in Docker to ensure that Open WebUI can access models loaded in Ollama.
- Load Models in Open WebUI
With both tools running, navigate to Open WebUI’s model management page and select available models. This allows Open WebUI to access models served through Ollama’s API, enabling you to send prompts and receive responses.
- Test the Integration
Try loading a model and sending a prompt from Open WebUI. Open WebUI relays the prompt to Ollama, and you should receive a response if the integration is configured correctly.
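If something doesn’t respond, a useful first check is whether Ollama’s API is reachable on the host, since Open WebUI reaches the same server through host.docker.internal. The sketch below assumes the default port 11434 and the llama2 model from Step 1, and uses Ollama’s OpenAI-compatible chat endpoint:
# Call Ollama's OpenAI-compatible chat completions endpoint directly
$body = @{
    model    = "llama2"
    messages = @(@{ role = "user"; content = "Reply with the word pong." })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Uri http://localhost:11434/v1/chat/completions -Method Post -ContentType "application/json" -Body $body |
    ForEach-Object { $_.choices[0].message.content }
If this returns a reply on the host, any remaining issue is in the container-to-host link (the OLLAMA_BASE_URL setting) rather than in Ollama itself.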
Optional: Automate the Setup with a Batch Script
To streamline the setup, you can create a batch file to start Ollama’s API server and Open WebUI with a single command. Here’s an example batch script:
Create the batch file:
@echo off
start /min ollama serve
docker start open-webui
start http://localhost:7860
This script will:
- Start Ollama in the background.
- Launch the Open WebUI Docker container.
- Open the Open WebUI interface in your browser.
Save this file with a .bat extension and run it whenever you want to start both services quickly.
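A matching stop script can be handy as well. This is a sketch that assumes the background server started above runs as ollama.exe; if you use the Ollama tray application instead, quit it from the system tray:
@echo off
REM Stop the Open WebUI container (it stays configured and can be restarted later)
docker stop open-webui
REM Stop the background Ollama server started by the start script (assumes the process name ollama.exe)
taskkill /F /IM ollama.exe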
Accessing Open WebUI Remotely (Advanced)
If you want to access Open WebUI from other devices on your network, follow these steps:
- Set Up Inbound Firewall Rules
Create an inbound rule to allow traffic on port 7860. Open the Windows Firewall settings, create a new rule, and configure it to accept incoming TCP traffic on port 7860 (see the PowerShell example after this list for a command-line alternative).
- Enable Port Forwarding
In PowerShell (run as Administrator), create a port proxy for the Open WebUI port. Use the following command, replacing <your_ip_address> with your Windows machine’s IP address:
netsh interface portproxy add v4tov4 listenport=7860 listenaddress=0.0.0.0 connectport=7860 connectaddress=<your_ip_address>
- Access Open WebUI Remotely
On another device, go to http://<your_ip_address>:7860 in a web browser to access Open WebUI from anywhere on your network.
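As noted in the firewall step above, the firewall rule and the port-proxy check can also be handled from an elevated PowerShell prompt instead of the GUI; the rule name below is just an example:
# Allow inbound TCP traffic to Open WebUI on port 7860 (the rule name is arbitrary)
New-NetFirewallRule -DisplayName "Open WebUI 7860" -Direction Inbound -Protocol TCP -LocalPort 7860 -Action Allow
# List the active port proxies to confirm the netsh rule from the previous step
netsh interface portproxy show all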
Conclusion
Setting up Ollama and Open WebUI on Windows using Docker Desktop opens up exciting possibilities for running and experimenting with language models locally. By leveraging Docker’s containerization and Ollama’s efficient model-serving capabilities, you can run powerful LLMs directly on your machine, securely and without relying on external APIs.