
Issue when make libllama.so with LLAMA_CUBLAS=1 #1077

Closed · exppii opened this issue Apr 20, 2023 · 3 comments

exppii commented Apr 20, 2023

I want to use text-generation-webui with CUBLAS enabled, so I tried to build libllama.so myself. Then I got this error:

[root@A12-213P llama.cpp]# LLAMA_CUBLAS=1 make libllama.so
I llama.cpp build info: 
I UNAME_S:  Linux
I UNAME_P:  x86_64
I UNAME_M:  x86_64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -DGGML_USE_CUBLAS -I/usr/local/cuda/include
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native
I LDFLAGS:  -lcublas_static -lculibos -lcudart_static -lcublasLt_static -lpthread -ldl -L/usr/local/cuda/lib64 -lrt
I CC:       cc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
I CXX:      g++ (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)

g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -shared -fPIC -o libllama.so llama.o ggml.o ggml-cuda.o -lcublas_static -lculibos -lcudart_static -lcublasLt_static -lpthread -ldl -L/usr/local/cuda/lib64
/opt/rh/devtoolset-11/root/usr/libexec/gcc/x86_64-redhat-linux/11/ld: ggml-cuda.o: relocation R_X86_64_32 against `.bss' can not be used when making a shared object; recompile with -fPIC
collect2: error: ld returned 1 exit status
make: *** [Makefile:184: libllama.so] Error 1
[root@A12-213P llama.cpp]# 
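
The key line is the ld error: ggml-cuda.o contains absolute R_X86_64_32 relocations, which cannot go into a shared object, since every object linked into a .so must be position-independent code. A generic way to confirm which object is at fault (a diagnostic sketch, not part of the original report):

readelf -r ggml-cuda.o | grep R_X86_64_32
# non-empty output means the object was compiled without -fPIC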


slaren (Collaborator) commented Apr 20, 2023

Does it help if you add -fPIC to the nvcc line in the Makefile?

llama.cpp/Makefile, lines 107 to 108 at commit 5addcb1:

ggml-cuda.o: ggml-cuda.cu ggml-cuda.h
	nvcc -arch=native -c -o $@ $<
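
Note that -fPIC is a host-compiler flag: nvcc does not necessarily forward it on its own, and its documented pass-through mechanism is --compiler-options (alias -Xcompiler). A minimal sketch of the adjusted rule, which is what the fix below ends up using:

ggml-cuda.o: ggml-cuda.cu ggml-cuda.h
	nvcc -arch=native -Xcompiler -fPIC -c -o $@ $<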

exppii (Author) commented Apr 21, 2023

Thanks. Adding --compiler-options -fPIC to the nvcc command at line 108 of the Makefile resolved the problem:

ifdef LLAMA_CUBLAS
        CFLAGS  += -DGGML_USE_CUBLAS -I/usr/local/cuda/include
        LDFLAGS += -lcublas_static -lculibos -lcudart_static -lcublasLt_static -lpthread -ldl -L/usr/local/cuda/lib64 -lrt
        OBJS    += ggml-cuda.o
ggml-cuda.o: ggml-cuda.cu ggml-cuda.h
        nvcc -arch=native --compiler-options -fPIC -c -o $@ $<
endif
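
To verify the fix, a generic rebuild-and-check sketch (commands assumed, not from the original thread; nm -D lists the dynamic symbols a shared object exports):

make clean
LLAMA_CUBLAS=1 make libllama.so
nm -D libllama.so | grep llama_    # the llama API symbols should now be exported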

exppii closed this as completed Apr 21, 2023
slaren (Collaborator) commented Apr 21, 2023

This will be fixed in #1094
