
Reinstall for gemma 2 #1559

Closed
4 tasks done
etemiz opened this issue Jun 28, 2024 · 5 comments
Comments


etemiz commented Jun 28, 2024

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

Run without issue

Current Behavior

raise RuntimeError(f"Failed to load shared library '{_lib_path}': {e}")

RuntimeError: Failed to load shared library '...........lib/python3.10/site-packages/llama_cpp/libllama.so': libggml.so: cannot open shared object file: No such file or directory
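
For context on what this error means: ctypes raises an OSError both when the library file itself is missing and when the dynamic linker cannot resolve one of its dependencies (here, libggml.so), and llama_cpp wraps that in a RuntimeError. A minimal sketch of this failure mode, using a hypothetical wrapper that mirrors the error message rather than llama_cpp's actual loader code:

```python
import ctypes

def load_shared_library(lib_path: str) -> ctypes.CDLL:
    # ctypes raises OSError both when lib_path itself is missing and when
    # one of its dependencies (e.g. libggml.so) cannot be resolved.
    try:
        return ctypes.CDLL(lib_path)
    except OSError as e:
        raise RuntimeError(f"Failed to load shared library '{lib_path}': {e}") from e

# load_shared_library("/nonexistent/libllama.so")  # raises RuntimeError
```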

Environment and Context

  • Operating System, e.g. for Linux:

Ubuntu22

  • SDK version, e.g. for Linux:

Python 3.10.12
GNU Make 4.3
g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

Failure Information (for bugs)

Steps to Reproduce

I clone llama-cpp-python:

git clone --recurse-submodules https://github.com/abetlen/llama-cpp-python.git
pip install --upgrade pip
cd vendor
cd llama.cpp
git checkout b3262
pip install -e .

It installs llama-cpp-python, but my script does not work and gives the error above (Failed to load shared library).

make clean
CMAKE_ARGS="-DLLAVA_BUILD=off -DGGML_HIPBLAS=on" python -m pip install --force-reinstall --no-cache-dir .

does not work.

I tried many different things, like running make manually in llama.cpp.

There is a libggml.so at venv/lib/python3.10/site-packages/lib/libggml.so but not at venv/lib/python3.10/site-packages/llama_cpp/libggml.so, and if I copy the file there it still doesn't work.

llama-cli -m gemma2......gguf compiles and works fine.


fat-tire commented Jun 28, 2024

Quick-but-not-best solution:

To find where it was actually built, you could try something like (assuming it's residing somewhere in your home directory):

$ find ~ | grep libggml.so

Once you know where the library is, note the path, then set:

export LD_LIBRARY_PATH=<path_to_library_directory>:$LD_LIBRARY_PATH

Then run your .py program.

In my case it was hiding at

/home/accountname/.local/lib/python3.10/site-packages/lib
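
Note that the dynamic loader reads LD_LIBRARY_PATH at process start, so it must be exported before launching Python. An alternative from inside Python is to preload the dependency with ctypes before importing llama_cpp; a sketch (the commented-out path is just an example, use whatever `find` reported on your system):

```python
import ctypes
import os

def preload(lib_path: str) -> None:
    # Load the library with RTLD_GLOBAL so its symbols are visible to
    # libraries loaded afterwards (e.g. llama_cpp's libllama.so).
    if not os.path.exists(lib_path):
        raise FileNotFoundError(lib_path)
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)

# preload("/home/accountname/.local/lib/python3.10/site-packages/lib/libggml.so")
# import llama_cpp
```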

etemiz (Author) commented Jun 28, 2024

Thank you. That worked well.


werruww commented Jun 30, 2024

How do I run Gemma 9B on llama-cpp-python? It runs on Ollama and LM Studio but does not work with the following code:
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-2-9b-it.Q4_K.gguf",  # path to GGUF file
    n_ctx=4096,      # max sequence length to use; longer sequences require much more resources
    n_threads=4,     # number of CPU threads to use, tailor to your system
    n_gpu_layers=0,  # number of layers to offload to GPU; set to 0 if no GPU acceleration is available
)

prompt = "write the python code to create text file"

# Simple inference example
output = llm(
    f"<|user|>\n{prompt}<|end|>\n<|assistant|>",
    max_tokens=256,    # generate up to 256 tokens
    stop=["<|end|>"],
    echo=True,         # whether to echo the prompt
)

print(output['choices'][0]['text'])
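
One thing to check besides the build: the `<|user|>`/`<|end|>` markers in this snippet come from another model family's template, while Gemma 2 instruction-tuned models expect the `<start_of_turn>`/`<end_of_turn>` chat format described on the Gemma model card. A sketch of a single-turn Gemma-style prompt (the helper name is made up for illustration):

```python
def gemma_prompt(user_message: str) -> str:
    # Single-turn Gemma chat format: a user turn, then an opened model
    # turn that the model completes.
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = gemma_prompt("write the python code to create a text file")
# output = llm(prompt, max_tokens=256, stop=["<end_of_turn>"])
```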

abetlen (Owner) commented Jul 2, 2024

After the recent llama.cpp refactor I also had to update the CMake build a little bit; as of version 0.2.80 the build should work correctly and Gemma 2 is supported.

abetlen closed this as completed Jul 2, 2024

fat-tire commented Jul 2, 2024

Can confirm. It works for me.
