Revisions

  1. @BretFisher BretFisher revised this gist May 9, 2025. 1 changed file with 5 additions and 0 deletions.
    5 changes: 5 additions & 0 deletions README.md
    Original file line number Diff line number Diff line change
    @@ -9,3 +9,8 @@
    3. Run the compose.yaml here to start up Open WebUI on port 3000
    - you can run my published compose file directly (without needing to save the YAML locally) with `docker compose -f oci://bretfisher/openwebui up`
    5. Create an admin user and log in at http://localhost:3000

    More info:
    My YouTube Short on getting started: https://youtube.com/shorts/DRbLUL50-wU
    My YouTube full details video: https://www.youtube.com/watch?v=3p2uWjFyI1U
    Docker Docs: https://docs.docker.com/compose/how-tos/model-runner/
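Once the stack is up, Open WebUI talks to the model runner over an OpenAI-compatible API. A minimal sketch of the chat-completions payload it sends; the endpoint URL and model tag are taken from the compose file and pull commands in this gist, and the final curl line is left as a comment because it only works once the endpoint is reachable (the `--no-tcp` flag above keeps the runner socket-only, so reaching it from the host is an assumption):

```shell
# OpenAI-compatible endpoint from the compose file (container-internal hostname);
# reaching it from the host needs the model runner's TCP endpoint enabled (not --no-tcp).
BASE_URL="http://model-runner.docker.internal:80/engines/llama.cpp/v1"
MODEL="ai/smollm2:latest"   # one of the models pulled in step 2

# Build the chat-completions payload that Open WebUI sends on your behalf.
PAYLOAD=$(printf '{"model":"%s","messages":[{"role":"user","content":"Say hello"}]}' "$MODEL")
echo "$PAYLOAD"

# With the stack running, you could send it with:
#   curl -s "$BASE_URL/chat/completions" -H "Content-Type: application/json" -d "$PAYLOAD"
```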
  2. @BretFisher BretFisher revised this gist May 9, 2025. 1 changed file with 0 additions and 0 deletions.
    Binary file modified Docker Model Runner Reference.excalidraw.png
  3. @BretFisher BretFisher revised this gist May 9, 2025. 1 changed file with 0 additions and 0 deletions.
    Binary file added Docker Model Runner Reference.excalidraw.png
  4. @BretFisher BretFisher revised this gist May 9, 2025. 1 changed file with 1 addition and 2 deletions.
    3 changes: 1 addition & 2 deletions README.md
    Original file line number Diff line number Diff line change
    @@ -2,11 +2,10 @@

    1. Enable Docker Model Runner (v4.40 or newer) in Settings or run the command:
    - `docker desktop enable model-runner --no-tcp`
    2. Download some models from https://hub.docker.com/u/ai
    2. Download some models from https://hub.docker.com/u/ai (or let the compose file below pull one for you)
    - `docker model pull ai/qwen2.5:0.5B-F16`
    - `docker model pull ai/smollm2:latest`
    - Be sure to only download models that you have the VRAM to run :)
    3. Run the compose.yaml here to start up Open WebUI on port 3000
    - you can run my published compose file directly (without needing to save the YAML locally) with `docker compose -f oci://bretfisher/openwebui up`
    5. Create an admin user and log in at http://localhost:3000

  5. @BretFisher BretFisher revised this gist May 9, 2025. 2 changed files with 18 additions and 3 deletions.
    3 changes: 2 additions & 1 deletion README.md
    Original file line number Diff line number Diff line change
    @@ -8,4 +8,5 @@
    - Be sure to only download models that you have the VRAM to run :)
    3. Run the compose.yaml here to start up Open WebUI on port 3000
    - you can run my published compose file directly (without needing to save the YAML locally) with `docker compose -f oci://bretfisher/openwebui up`
    5. Create an admin user and log in at http://localhost:3000
    5. Create an admin user and log in at http://localhost:3000

    18 changes: 16 additions & 2 deletions compose.yaml
    Original file line number Diff line number Diff line change
    @@ -1,3 +1,9 @@
    # This is a Docker Compose file for running the Open WebUI with a specific model.
    # It uses the new provider feature of Compose to specify the model to be downloaded.
    # Note that Open WebUI lets you select any downloaded model, but it won't auto-download them
    # so the provider service will ensure it's downloaded first.
    # https://docs.docker.com/compose/how-tos/model-runner/

    services:
      open-webui:
        image: ghcr.io/open-webui/open-webui:main
    @@ -8,8 +14,16 @@ services:
          - OPENAI_API_KEY=na
        volumes:
          - open-webui:/app/backend/data
        restart: always
        depends_on:
          - ai-runner

      ai-runner:
        provider:
          type: model
          options:
            model: ai/gemma3-qat:1B-Q4_K_M # quantized; needs about 1GB of GPU memory
            # model: ai/gemma3:4B-F16 # needs at least 8GB of GPU memory
            # https://hub.docker.com/r/ai/gemma3

    volumes:
      open-webui:
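The `ai-runner` service above uses Compose's model `provider` feature (see the Docker Docs link in the comments): Compose asks Docker Model Runner to pull the model before dependent services start. Reduced to its minimal shape, with the model tag as a placeholder you would swap for one your VRAM can handle:

```yaml
services:
  ai-runner:
    provider:
      type: model              # fulfilled by Docker Model Runner, not a container image
      options:
        model: ai/smollm2:latest   # placeholder: any model tag you can run
```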

  6. @BretFisher BretFisher revised this gist Apr 11, 2025. 1 changed file with 2 additions and 1 deletion.
    3 changes: 2 additions & 1 deletion README.md
    Original file line number Diff line number Diff line change
    @@ -7,4 +7,5 @@
    - `docker model pull ai/smollm2:latest`
    - Be sure to only download models that you have the VRAM to run :)
    3. Run the compose.yaml here to start up Open WebUI on port 3000
    4. http://localhost:3000
    - you can run my published compose file directly (without needing to save the YAML locally) with `docker compose -f oci://bretfisher/openwebui up`
    5. Create an admin user and log in at http://localhost:3000
  7. @BretFisher BretFisher revised this gist Apr 10, 2025. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion README.md
    Original file line number Diff line number Diff line change
    @@ -1,4 +1,4 @@
    # How to use this compose file
    # How to use this compose file to run Open WebUI on a local LLM running with Docker Model Runner

    1. Enable Docker Model Runner (v4.40 or newer) in Settings or run the command:
    - `docker desktop enable model-runner --no-tcp`
  8. @BretFisher BretFisher revised this gist Apr 10, 2025. 1 changed file with 10 additions and 0 deletions.
    10 changes: 10 additions & 0 deletions README.md
    Original file line number Diff line number Diff line change
    @@ -0,0 +1,10 @@
    # How to use this compose file

    1. Enable Docker Model Runner (v4.40 or newer) in Settings or run the command:
    - `docker desktop enable model-runner --no-tcp`
    2. Download some models from https://hub.docker.com/u/ai
    - `docker model pull ai/qwen2.5:0.5B-F16`
    - `docker model pull ai/smollm2:latest`
    - Be sure to only download models that you have the VRAM to run :)
    3. Run the compose.yaml here to start up Open WebUI on port 3000
    4. http://localhost:3000
  9. @BretFisher BretFisher created this gist Apr 10, 2025.
    15 changes: 15 additions & 0 deletions compose.yaml
    Original file line number Diff line number Diff line change
    @@ -0,0 +1,15 @@
    services:
      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        ports:
          - "3000:8080"
        environment:
          - OPENAI_API_BASE_URL=http://model-runner.docker.internal:80/engines/llama.cpp/v1
          - OPENAI_API_KEY=na
        volumes:
          - open-webui:/app/backend/data
        restart: always

    volumes:
      open-webui: