
Installation with cuBLAS fails #454

Closed
IvanDelvert opened this issue Jul 7, 2023 · 1 comment
Hi,

I'm trying to install llama-cpp-python with cuBLAS on WSL2. I get the following error with the command CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python:

Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [110 lines of output]
      
      
      --------------------------------------------------------------------------------
      -- Trying 'Ninja' generator
      --------------------------------
      ---------------------------
      ----------------------
      -----------------
      ------------
      -------
      --
      Not searching for unused variables given on the command line.
      -- The C compiler identification is GNU 9.4.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- The CXX compiler identification is GNU 9.4.0
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Configuring done (1.4s)
      -- Generating done (0.0s)
      -- Build files have been written to: /tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709/_cmake_test_compile/build
      --
      -------
      ------------
      -----------------
      ----------------------
      ---------------------------
      --------------------------------
      -- Trying 'Ninja' generator - success
      --------------------------------------------------------------------------------
      
      Configuring Project
        Working directory:
          /tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709/_skbuild/linux-x86_64-3.9/cmake-build
        Command:
          /tmp/pip-build-env-xbi0jevf/overlay/lib/python3.9/site-packages/cmake/data/bin/cmake /tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709 -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-xbi0jevf/overlay/lib/python3.9/site-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709/_skbuild/linux-x86_64-3.9/cmake-install -DPYTHON_VERSION_STRING:STRING=3.9.16 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-xbi0jevf/overlay/lib/python3.9/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/home/ivan/.pyenv/versions/3.9.16/bin/python -DPYTHON_INCLUDE_DIR:PATH=/home/ivan/.pyenv/versions/3.9.16/include/python3.9 -DPYTHON_LIBRARY:PATH=/home/ivan/.pyenv/versions/3.9.16/lib/libpython3.9.so -DPython_EXECUTABLE:PATH=/home/ivan/.pyenv/versions/3.9.16/bin/python -DPython_ROOT_DIR:PATH=/home/ivan/.pyenv/versions/3.9.16 -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/home/ivan/.pyenv/versions/3.9.16/include/python3.9 -DPython3_EXECUTABLE:PATH=/home/ivan/.pyenv/versions/3.9.16/bin/python -DPython3_ROOT_DIR:PATH=/home/ivan/.pyenv/versions/3.9.16 -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/home/ivan/.pyenv/versions/3.9.16/include/python3.9 -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-xbi0jevf/overlay/lib/python3.9/site-packages/ninja/data/bin/ninja -DLLAMA_CUBLAS=on -DCMAKE_BUILD_TYPE:STRING=Release -DLLAMA_CUBLAS=on
      
      Not searching for unused variables given on the command line.
      -- The C compiler identification is GNU 9.4.0
      -- The CXX compiler identification is GNU 9.4.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.25.1")
      fatal: not a git repository (or any of the parent directories): .git
      fatal: not a git repository (or any of the parent directories): .git
      CMake Warning at vendor/llama.cpp/CMakeLists.txt:114 (message):
        Git repository not found; to enable automatic generation of build info,
        make sure Git is installed and the project is a Git repository.
      
      
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
      -- Check if compiler accepts -pthread
      -- Check if compiler accepts -pthread - yes
      -- Found Threads: TRUE
      -- Found CUDAToolkit: /usr/include (found version "10.1.243")
      -- cuBLAS found
      -- The CUDA compiler identification is NVIDIA 10.1.243
      -- Detecting CUDA compiler ABI info
      -- Detecting CUDA compiler ABI info - done
      -- Check for working CUDA compiler: /usr/bin/nvcc - skipped
      -- Detecting CUDA compile features
      -- Detecting CUDA compile features - done
      -- Using CUDA architectures: 52
      -- CMAKE_SYSTEM_PROCESSOR: x86_64
      -- x86 detected
      -- Configuring done (5.8s)
      -- Generating done (0.0s)
      -- Build files have been written to: /tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709/_skbuild/linux-x86_64-3.9/cmake-build
      [1/8] Building CUDA object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
      FAILED: vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o
      /usr/bin/nvcc  -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_DMMV_Y=1 -DGGML_USE_CUBLAS -DGGML_USE_K_QUANTS -DK_QUANTS_PER_ITERATION=2 -I/tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709/vendor/llama.cpp/. -O3 -DNDEBUG -std=c++11 --generate-code=arch=compute_52,code=[compute_52,sm_52] -Xcompiler=-fPIC -mf16c -mfma -mavx -mavx2 -Xcompiler -pthread -x cu -c /tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709/vendor/llama.cpp/ggml-cuda.cu -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o && /usr/bin/nvcc  -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_DMMV_Y=1 -DGGML_USE_CUBLAS -DGGML_USE_K_QUANTS -DK_QUANTS_PER_ITERATION=2 -I/tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709/vendor/llama.cpp/. -O3 -DNDEBUG -std=c++11 --generate-code=arch=compute_52,code=[compute_52,sm_52] -Xcompiler=-fPIC -mf16c -mfma -mavx -mavx2 -Xcompiler -pthread -x cu -M /tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709/vendor/llama.cpp/ggml-cuda.cu -MT vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o -o vendor/llama.cpp/CMakeFiles/ggml.dir/ggml-cuda.cu.o.d
      nvcc fatal   : 'f16c': expected a number
      [2/8] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/k_quants.c.o
      [3/8] Building CXX object vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o
      [4/8] Building C object vendor/llama.cpp/CMakeFiles/ggml.dir/ggml.c.o
      ninja: build stopped: subcommand failed.
      Traceback (most recent call last):
        File "/tmp/pip-build-env-xbi0jevf/overlay/lib/python3.9/site-packages/skbuild/setuptools_wrap.py", line 674, in setup
          cmkr.make(make_args, install_target=cmake_install_target, env=env)
        File "/tmp/pip-build-env-xbi0jevf/overlay/lib/python3.9/site-packages/skbuild/cmaker.py", line 697, in make
          self.make_impl(clargs=clargs, config=config, source_dir=source_dir, install_target=install_target, env=env)
        File "/tmp/pip-build-env-xbi0jevf/overlay/lib/python3.9/site-packages/skbuild/cmaker.py", line 742, in make_impl
          raise SKBuildError(msg)
      
      An error occurred while building with CMake.
        Command:
          /tmp/pip-build-env-xbi0jevf/overlay/lib/python3.9/site-packages/cmake/data/bin/cmake --build . --target install --config Release --
        Install target:
          install
        Source directory:
          /tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709
        Working directory:
          /tmp/pip-install-k8o7n7o2/llama-cpp-python_84acdef131d14e86b92360eb52309709/_skbuild/linux-x86_64-3.9/cmake-build
      Please check the install target is valid and see CMake's output for more information.
      
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

Any ideas? :)

Environment and Context

Linux FRXPS15-1TFQDB3 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

  • SDK versions:
Python 3.9.16

GNU Make 4.2.1
Built for x86_64-pc-linux-gnu

g++ (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
@IvanDelvert (Author)

I found the cause: something was wrong with my CUDA PATH, and an older version of CUDA was also present on my system. Fixed thanks to this comment: NVlabs/instant-ngp#747 (comment).
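For anyone hitting the same nvcc fatal : 'f16c': expected a number failure: the log shows CMake picking up CUDA 10.1 from /usr/bin/nvcc, and that old nvcc chokes on the host-compiler flags (-mf16c, -mavx2, …) the build passes through. A minimal sketch of the check-and-fix, assuming the newer toolkit is installed at /usr/local/cuda (adjust the path to your actual install; the install command is left commented out since it kicks off a full build):

```shell
# Show which nvcc is first on PATH. An old distro-packaged
# /usr/bin/nvcc (e.g. from nvidia-cuda-toolkit) is a common culprit:
command -v nvcc || true
nvcc --version 2>/dev/null || true

# Put the newer toolkit ahead of it (example path; adjust to your system):
export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"

# Then rebuild, skipping any cached wheel:
# CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --no-cache-dir llama-cpp-python
```

After the exports, nvcc --version should report the newer toolkit; if it still shows 10.1, the old package is likely still shadowing it and may need to be removed (e.g. apt remove nvidia-cuda-toolkit).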

antoine-lizee pushed a commit to antoine-lizee/llama-cpp-python that referenced this issue Oct 30, 2023