Created September 23, 2025 17:49 — forked from Artefact2/README.md
# GGUF quantizations overview

## Which GGUF is right for me? (Opinionated)

Good question! I am collecting human data on how quantization affects outputs. See here for more information: ggml-org/llama.cpp#5962

In the meantime, use the largest quantization that fully fits in your GPU's VRAM. If Q4_K_S fits comfortably, consider a model with more parameters instead of a higher-precision quant of the same model.
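The "fits in your GPU" rule can be sketched numerically. The function and the headroom figure below are hypothetical illustrations, not part of llama.cpp: the weights file must fit in VRAM with some margin left over for the KV cache and activations (the actual margin depends on context length and model architecture).

```python
def fits_in_vram(model_bytes: int, vram_gib: float, headroom_gib: float = 2.0) -> bool:
    """Rough check: do the model weights plus a fixed headroom fit in VRAM?

    model_bytes   -- size of the GGUF file in bytes
    vram_gib      -- total GPU memory in GiB
    headroom_gib  -- hypothetical margin for KV cache/activations (assumption)
    """
    return model_bytes + headroom_gib * 1024**3 <= vram_gib * 1024**3

# Example: a ~4.1 GiB Q4_K_S file on an 8 GiB GPU leaves room;
# a ~7 GiB file on the same GPU does not.
print(fits_in_vram(int(4.1 * 1024**3), vram_gib=8.0))  # True
print(fits_in_vram(int(7.0 * 1024**3), vram_gib=8.0))  # False
```

In practice, check the GGUF file size against `nvidia-smi` (or equivalent) and pick the largest quant that still leaves comfortable headroom.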

## llama.cpp feature matrix

See the wiki upstream: https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix