
Llama.cpp embedder failure #60

Open
1 of 2 tasks
CHesketh76 opened this issue Jan 11, 2025 · 0 comments
Describe the bug
When using the embedding feature with notes that are linked (e.g. [[my_other_notes]]) against llama.cpp's server (e.g. http://localhost:8080), an error message pops up for a split second saying 'docs failed', and the context from my notes is then not used.

However, this issue does not appear with the LMStudio application when a small embedder is used instead of the larger LLM embedder (which llama.cpp uses by default).

To Reproduce
Steps to reproduce the behavior:

  1. Start a llama.cpp server with a 22B or larger model.
  2. Ask a question in a note whose answer lives in another note, linked using [[the_answer]].
  3. Run the local-gpt command.
  4. See the error.

Expected behavior
An error message appears and the output is incorrect (the linked note's context is ignored).


How did you verify that the plugin is the problem?
Ran a curl command against the /embedding endpoint, and a vector is returned. This means local-gpt is having an issue handling large embeddings (i.e. >4k dimensions).
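For reference, the curl check above can also be scripted. The sketch below is an assumption about the setup described in this report: it POSTs to llama.cpp's /embedding endpoint (the path and the {"content": ...} payload follow llama.cpp's HTTP server, but the exact response shape varies between versions) and reports the dimension of the returned vector. The function name and URL are illustrative, not part of the plugin.

```python
# Minimal sketch: query a llama.cpp server's /embedding endpoint and report
# the embedding dimension, mirroring the curl verification described above.
# Assumes the older llama.cpp response shape {"embedding": [...]}; newer
# builds may return a list of result objects instead.
import json
import urllib.error
import urllib.request


def get_embedding_dim(base_url, text, timeout=5):
    """Return the dimension of the embedding vector, or None if the server
    is unreachable or the response does not have the expected shape."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/embedding",
        data=json.dumps({"content": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        # Server down, connection refused, or non-JSON response.
        return None
    vec = body.get("embedding") if isinstance(body, dict) else None
    return len(vec) if isinstance(vec, list) else None
```

If this prints a dimension well above 4k for the large model (as the report suggests), the server side is working and the failure would be in how local-gpt consumes the vector.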

Desktop (please complete the following information):

  • Desktop
  • Mobile

Additional context
