Describe the bug
When using the embedding feature with linked notes (e.g. [[my_other_notes]]) against llama.cpp's server (e.g. http://localhost:8080), an error message saying 'docs failed' pops up for a split second, and the context from my notes is then not used.
However, this issue does not appear with the LMStudio application when a small embedding model is used instead of the larger LLM embedder (which llama.cpp uses by default).
To Reproduce
Steps to reproduce the behavior:
1. Start the llama.cpp server with a 22B or larger model (see the example command below).
2. Ask a question in a note whose answer lives in another note, linked via [[the_answer]].
3. Run the local-gpt command.
4. See error.
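A minimal sketch of step 1, assuming a recent llama.cpp build where the server binary is `llama-server` and embeddings are enabled with the `--embedding` flag (`--embeddings` in newer builds); the model path is a placeholder:

```sh
# Start llama.cpp's HTTP server with embeddings enabled.
# ./models/model-22b.gguf is a placeholder path; --port 8080 matches
# the URL configured in the plugin (http://localhost:8080).
./llama-server -m ./models/model-22b.gguf --embedding --port 8080
```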
Expected behavior
The context from the linked note should be used in the answer; instead, the 'docs failed' error message appears and the output is incorrect.
How did you verify that the plugin is the problem?
Ran a curl command against the /embedding endpoint and a vector was returned, so the server itself responds correctly. This suggests local-gpt has an issue handling large embedding vectors (i.e. more than 4k dimensions).
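A sketch of that verification call, assuming llama.cpp's server exposes POST /embedding taking a JSON `content` field; the prompt text is a placeholder:

```sh
# Ask the llama.cpp server for an embedding; a JSON response containing
# an "embedding" array of floats shows the endpoint itself works.
curl -s http://localhost:8080/embedding \
  -H "Content-Type: application/json" \
  -d '{"content": "test sentence for embedding"}'
```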