# Clone the repo
git clone https://github.com/imartinez/privateGPT
cd privateGPT

# Install Python 3.11
pyenv install 3.11
pyenv local 3.11

# Install dependencies
poetry install --with ui,local

# Download the embedding and LLM models
poetry run python scripts/setup

# (Optional) For a Mac with a Metal GPU, enable it. Check the Installation and Settings section to learn how to enable the GPU on other platforms
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python

# Run the local server
PGPT_PROFILES=local make run

# Note: on a Mac with Metal you should see a ggml_metal_add_buffer log, confirming the GPU is being used

# Navigate to the UI and try it out!
http://localhost:8001/
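
# (Optional) Quick sanity checks -- a minimal sketch, not part of the original instructions.
# Assumes the server was started with `PGPT_PROFILES=local make run`; only the UI root URL
# from above is used here, no other endpoints are assumed.

# Confirm pyenv activated Python 3.11 for this directory
python --version        # expect something like: Python 3.11.x

# Confirm the local server answers on the UI port
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8001/    # expect 200 once the UI is up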