Support for GGML BLAS #190
Labels
topic:backend-support
Support for alternate non-GGML backends, or for particular GGML backend features
It would be great if the user could enable BLAS when building GGML. Enabling BLAS significantly improves GGML's performance: I have tested llama.cpp/GGML with cuBLAS, and turning it on really pays off.
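For illustration, here is a minimal sketch of how an opt-in Cargo feature could be forwarded to GGML's compile-time BLAS switches, assuming the crate compiles `ggml.c` via the `cc` crate in a `build.rs`. The feature names, source paths, and linked libraries below are illustrative assumptions, not this project's actual configuration; the exact macros and extra sources required should be checked against the vendored GGML version.

```rust
// build.rs — sketch of wiring Cargo features to GGML's BLAS defines.
// Assumes `cc` is a build-dependency and `openblas` / `cublas` are declared
// as optional features in Cargo.toml; paths below are placeholders.

fn main() {
    let mut build = cc::Build::new();
    build.file("ggml/src/ggml.c").include("ggml/include");

    // GGML historically gates its BLAS code paths behind compile-time macros
    // such as GGML_USE_OPENBLAS and GGML_USE_CUBLAS; verify the exact names
    // for the vendored version.
    if cfg!(feature = "openblas") {
        build.define("GGML_USE_OPENBLAS", None);
        println!("cargo:rustc-link-lib=openblas");
    }
    if cfg!(feature = "cublas") {
        // Note: cuBLAS support also requires compiling GGML's CUDA sources
        // (e.g. ggml-cuda.cu) with nvcc, which this sketch does not cover.
        build.define("GGML_USE_CUBLAS", None);
        println!("cargo:rustc-link-lib=cublas");
        println!("cargo:rustc-link-lib=cudart");
    }

    build.compile("ggml");
}
```

A user could then build with something like `cargo build --features cublas` (again, assuming such features were declared) instead of BLAS support being fixed at packaging time.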