
README: Update with CMake and windows example #748
Merged — 2 commits, Apr 5, 2023
Conversation

@adithyab94 (Contributor) commented Apr 3, 2023

Added an example of building the project with CMake, since a lot of people have been asking about this.

README.md Outdated
@@ -145,6 +145,9 @@ git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

#For Windows and CMake, use the following command instead:
cmake -S . -B build/ -G "MinGW Makefiles"
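For reference, the full out-of-source workflow implied by this diff would look something like the following — a sketch assuming MinGW-w64 and CMake are already installed and on `PATH`:

```shell
# Clone the repository and enter it
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure an out-of-source build in ./build using the MinGW generator
cmake -S . -B build/ -G "MinGW Makefiles"

# Compile everything that was configured above
cmake --build build/
```

The `-S`/`-B` flags keep generated files out of the source tree, so the existing `Makefile` workflow is unaffected.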
Collaborator:

This only works for MinGW, but installing Visual Studio Community Edition is much simpler for average users.

Contributor:

Yes indeed, you could add the following lines for people using Visual Studio and CMake:

cd <path_to_llama_folder>
mkdir build
cd build
cmake ..
cmake --build . --config Release

Contributor (Author):

Thanks, I have updated it with your suggestion.

@adithyab94 adithyab94 requested review from KASR and howard0su April 4, 2023 09:02
@howard0su (Collaborator):

@ggerganov shall we consider using only CMake to build the project? Maintaining both the Makefile and CMake is hard, and I have already noticed some differences between the two build systems. When we need to pull in an external dependency, it will be trickier to do so in the Makefile.

@@ -145,6 +145,13 @@ git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

#For Windows and CMake, use the following command instead:
Collaborator:

The instructions look good. Can we also mention that the user needs to install Visual Studio?

@ggerganov ggerganov merged commit 594cc95 into ggml-org:master Apr 5, 2023
@ggerganov (Member):

@howard0su For now we will keep the Makefile, since it is the most trivial way to build the project and a lot of developers appreciate the simplicity.

@td-us commented Apr 5, 2023

Hi, I tried to follow your updated guidelines to run the model. Unfortunately, I encounter the error "Error: could not load cache" when running "cmake --build . --config Release". Any idea why? I am using Windows 10 with git, CMake, Visual Studio, and Anaconda installed, and I am running the commands from an Anaconda prompt. "cmake .." gives this output:
[screenshot of the cmake configure output]
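A common cause of "could not load cache" is running `cmake --build` in a directory where the configure step (`cmake ..`) never completed successfully, leaving no valid `CMakeCache.txt`, or where the cache was generated by a different generator or toolchain. A sketch of a typical recovery, assuming the problem is a stale or incomplete cache rather than a missing compiler:

```shell
# From inside the build directory: discard the old cache and reconfigure.
# (On Windows cmd use `del` / `rmdir /s /q`; in a POSIX shell use `rm`.)
cd build
rm -f CMakeCache.txt
rm -rf CMakeFiles

# Reconfigure, then build; fix any errors reported here before building.
cmake ..
cmake --build . --config Release
```

If `cmake ..` itself reports errors (for example, no compiler found), those must be resolved first, since the build step can only proceed from a successfully generated cache.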
