Ollama

Run DeepSeek-R1, Qwen 3, Llama 3.3, Qwen 2.5‑VL, Gemma 3, and other models, locally.

Setup

    # Install on Linux/macOS, then start the server
    curl -fsSL https://ollama.com/install.sh | sh
    ollama serve

    # Docker alternative: Open WebUI with bundled Ollama (web UI at http://localhost:3000)
    docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

    # On MS Windows, install with OllamaSetup.exe instead
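Once the server is running, it listens on port 11434 by default and exposes an HTTP API. A minimal Python sketch (the helper names here are ours, not part of Ollama) that checks the server is reachable and lists installed models via the `/api/tags` endpoint:

```python
import json
import urllib.error
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # default Ollama API address


def tags_url(host: str = OLLAMA_HOST) -> str:
    """Build the URL of the /api/tags endpoint, which lists installed models."""
    return host.rstrip("/") + "/api/tags"


def list_models(host: str = OLLAMA_HOST):
    """Return installed model names, or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(tags_url(host), timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None


if __name__ == "__main__":
    models = list_models()
    print("Ollama server not reachable" if models is None else models)
```

This is roughly what `ollama list` reports, fetched over HTTP instead of the CLI.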

# List models installed on your computer.
ollama list

# Fetch model 'llama3.1:latest'. Complete list at: https://github.com/ollama/ollama#model-library
ollama pull llama3.1:latest

# Start an interactive chat session with the model
ollama run llama3.1:latest
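`ollama run` is the interactive CLI; the same model can also be queried programmatically through the server's `/api/generate` endpoint. A hedged Python sketch (helper names are ours; assumes the server is running and the model has been pulled):

```python
import json
import urllib.request


def build_generate_payload(model: str, prompt: str) -> dict:
    """JSON body for POST /api/generate; stream=False returns one JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a single prompt to a locally served model and return its reply text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        host + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]
```

Usage: `generate("llama3.1:latest", "Why is the sky blue?")` returns the model's reply as a string.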