Releases · githubcto/llama.cpp
SD3GGUF-b3986-89406a9
llama-quantize.exe for quantizing SD, SDXL, SD3, SD3.5, and FLUX models to GGUF.
Patched with lcpp_sd3.patch.
Built with GPU_TARGETS="gfx1100;gfx1101;gfx1102;gfx1030;gfx906".
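For reference, a minimal usage sketch of the quantize step (file names and quantization type are placeholders; producing the F16 GGUF in the first place requires a separate conversion script that is not part of this release):

    # quantize an F16 GGUF checkpoint down to Q4_K_S (input/output names are examples)
    llama-quantize.exe sd3_medium-F16.gguf sd3_medium-Q4_K_S.gguf Q4_K_S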
qt-b3923-a58a0a4
llama-quantize.exe for quantizing SD, SDXL, SD3, and FLUX models to GGUF.
Hand-patched with lcpp.patch.
b3920
Add ROCm 6 builds for gfx1100;gfx1101;gfx1102;gfx1030;gfx906.
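The target list is a CMake-style, semicolon-separated set of AMD GPU architectures. A hedged build sketch follows; GGML_HIPBLAS and the llama-quantize target name match llama.cpp of this era, but the exact variable that consumes the architecture list can differ between versions:

    # configure a ROCm/HIP build for the listed AMD GPU architectures
    cmake -S . -B build -DGGML_HIPBLAS=ON -DGPU_TARGETS="gfx1100;gfx1101;gfx1102;gfx1030;gfx906"
    # build only the quantize tool
    cmake --build build --config Release --target llama-quantize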
b3917
server : handle "logprobs" field with false value (#9871)
Co-authored-by: Gimling <[email protected]>
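The fix concerns requests that explicitly send "logprobs": false. A hedged example against the server's OpenAI-compatible endpoint (host, port, and message content are placeholders):

    # chat completion request that explicitly disables logprobs
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages":[{"role":"user","content":"Hello"}],"logprobs":false}'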