
[SYCL] Revisited & updated SYCL build documentation #6141

Merged (9 commits) on Mar 28, 2024

Conversation

OuadiElfarouki (Contributor)
Updated README-sycl.md: fixed typos and added clarifications for the different build targets (Intel GPUs and Nvidia GPUs so far).
Please let me know what you think @NeoZhangJianyu @airMeng @AidanBeltonS @abhilash1910

OuadiElfarouki changed the title "Revisited & updated SYCL build documentation" to "[SYCL] Revisited & updated SYCL build documentation" Mar 18, 2024
AidanBeltonS (Contributor) left a comment

With changes it LGTM!

ngxson (Collaborator) left a comment

LGTM, nice job. Thanks!

OuadiElfarouki (Contributor, Author)

The branch has been rebased; some conflicts with #6151 have been resolved locally.

NeoZhangJianyu (Collaborator)
@OuadiElfarouki
I see two parts in this PR:

  1. Support for NV GPUs.
     Comments:
  • Add links where possible, e.g. for checking/installing CUDA and the driver, and for building oneMKL with CUDA.
  • Keep the llama.cpp guide compact and clear.
  2. Refactoring of the legacy description.
     Comments: avoid changing more than necessary (that would force existing users to read it again); don't change the style; keep the guide compact.

Some of my comments are no longer visible because the line numbers have changed.
I will review your next version again, so there is no need to answer my comments above.

abhilash1910 self-requested a review March 22, 2024 07:41
AidanBeltonS merged commit 5106ef4 into ggerganov:master Mar 28, 2024
NeoZhangJianyu (Collaborator)

@AidanBeltonS
I had created more comments, but most of them have not been handled.
As the original author of this document, I suggest reverting the PR until the comments are addressed.

hodlen pushed a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024
* Revisited & updated SYCL build documentation

* removed outdated comment

* Addressed PR comments

* Trimed white spaces

* added new end line
hodlen pushed a commit to hodlen/llama.cpp that referenced this pull request Apr 3, 2024
tybalex pushed a commit to rubra-ai/tools.cpp that referenced this pull request Apr 17, 2024
5 participants