This repository has been archived by the owner on Jun 24, 2024. It is now read-only.

Support for GGML BLAS #190

Closed
darxkies opened this issue May 7, 2023 · 5 comments
Labels
topic:backend-support Support for alternate non-GGML backends, or for particular GGML backend features

Comments

@darxkies
Contributor

darxkies commented May 7, 2023

It would be great if the user could enable BLAS in GGML. Enabling BLAS significantly improves GGML's performance. I have tested llama.cpp/GGML with cuBLAS, and enabling it really pays off.
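For illustration, here is a minimal build.rs sketch of what user-selectable BLAS could look like; the `openblas` Cargo feature name and the vendored source path are assumptions, not this crate's actual layout, while GGML_USE_OPENBLAS is the define llama.cpp's build uses for its CPU BLAS path.

```rust
// build.rs -- hypothetical sketch; the feature name and paths are assumptions.
fn main() {
    let mut build = cc::Build::new();
    build.file("ggml/ggml.c").include("ggml");

    // Cargo exposes enabled features as CARGO_FEATURE_* env vars. With the
    // hypothetical `openblas` feature on, define GGML_USE_OPENBLAS so GGML
    // offloads large matrix multiplications to BLAS, and link the system
    // OpenBLAS library.
    if std::env::var("CARGO_FEATURE_OPENBLAS").is_ok() {
        build.define("GGML_USE_OPENBLAS", None);
        println!("cargo:rustc-link-lib=openblas");
    }

    build.compile("ggml");
}
```

A user would then opt in with `cargo build --features openblas`, keeping the default build free of the extra link dependency.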

darxkies changed the title from "Support for GGLM BLAS" to "Support for GGML BLAS" on May 7, 2023
@darxkies
Contributor Author

darxkies commented May 7, 2023

cublas-patch.txt

Ugly patch to add support for cuBLAS.
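A minimal sketch of what such a patch amounts to on the build side, under the same hypothetical layout as above; ggml-cuda.cu and GGML_USE_CUBLAS match the names llama.cpp's cuBLAS backend used at the time, and cc::Build::cuda(true) switches compilation to nvcc. This is not the attached patch, just an illustration.

```rust
// build.rs -- hypothetical sketch, not the attached cublas-patch.txt.
fn main() {
    let mut ggml = cc::Build::new();
    ggml.file("ggml/ggml.c").include("ggml");

    // With a hypothetical `cublas` feature: compile the CUDA kernels with
    // nvcc and define GGML_USE_CUBLAS so GGML dispatches large matrix
    // multiplications to cuBLAS.
    if std::env::var("CARGO_FEATURE_CUBLAS").is_ok() {
        ggml.define("GGML_USE_CUBLAS", None);
        cc::Build::new()
            .cuda(true) // compile with nvcc
            .file("ggml/ggml-cuda.cu")
            .include("ggml")
            .compile("ggml-cuda");
        println!("cargo:rustc-link-lib=cublas");
        println!("cargo:rustc-link-lib=cudart");
    }

    ggml.compile("ggml");
}
```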

@virajk31

Thanks, can CBLAS be enabled as well?

@darxkies
Contributor Author

CLBlast can be enabled on Linux and Windows via PR #282.
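For reference, the analogous CLBlast sketch under the same hypothetical layout; GGML_USE_CLBLAST and ggml-opencl.c mirror llama.cpp's OpenCL backend of that era, and CLBlast plus the OpenCL runtime are linked dynamically. PR #282 is the actual implementation; this is only an illustration.

```rust
// build.rs -- hypothetical sketch of the CLBlast path.
fn main() {
    let mut ggml = cc::Build::new();
    ggml.file("ggml/ggml.c").include("ggml");

    // With a hypothetical `clblast` feature: build the OpenCL kernels and
    // define GGML_USE_CLBLAST so mat-muls go through CLBlast.
    if std::env::var("CARGO_FEATURE_CLBLAST").is_ok() {
        ggml.define("GGML_USE_CLBLAST", None)
            .file("ggml/ggml-opencl.c");
        println!("cargo:rustc-link-lib=clblast");
        println!("cargo:rustc-link-lib=OpenCL");
    }

    ggml.compile("ggml");
}
```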

philpax added the topic:backend-support label on Jun 15, 2023
@philpax
Collaborator

philpax commented Jun 19, 2023

Is this done now?

@darxkies
Contributor Author

From my POV, it is.
