OpenAI-compatible API endpoint #1
Comments
Hey, I looked into this. Because I am using the official Python lib for my local calls, this should be supported out of the box. Just update the .env file to point to http://[IP]:[PORT] and update the model you want to use. It should all work; I will be doing a test later to close this out, but you should have no issue as long as the server is OpenAI compatible.
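For reference, a minimal .env sketch along those lines. OPENAI_BASE_URL and OPENAI_API_KEY are the variables the official openai Python library reads on its own; the MODEL name below is hypothetical and would need to match whatever key this project's .env actually expects:

```
# Point the official OpenAI Python client at a local OpenAI-compatible server.
# OPENAI_BASE_URL and OPENAI_API_KEY are read by the openai library itself.
OPENAI_BASE_URL=http://127.0.0.1:5000/v1
# Most local servers accept any non-empty key.
OPENAI_API_KEY=not-needed
# Hypothetical variable name; use the model id your local server reports.
MODEL=local-model
```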
I'll have a look and let you know.
So here is my .env:
At launch, the LM Studio server prints this:
And this is what I get if I issue a request with civitai-cli-manager and the .env above:
I think we would need a specific configuration for OpenAI-compatible APIs like text-generation-webui or LM Studio.
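As a rough sketch of what such a configuration could look like with the official openai Python library (the environment variable names and fallback URL here are assumptions, not this project's actual settings):

```python
import os
from openai import OpenAI

# Hypothetical settings; the real variable names depend on civitai-cli-manager.
base_url = os.environ.get("OPENAI_BASE_URL", "http://127.0.0.1:5000/v1")
api_key = os.environ.get("OPENAI_API_KEY", "not-needed")  # local servers usually ignore the key
model = os.environ.get("MODEL", "local-model")

# The same client works for api.openai.com or any OpenAI-compatible server;
# only base_url changes.
client = OpenAI(base_url=base_url, api_key=api_key)

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```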
@ali0une
Hi there.
Very nice project!
Would it be possible to use an OpenAI-compatible API endpoint with a local LLM through LM Studio, text-generation-webui, or llama.cpp?
I see llama.cpp has API endpoints and even an OAI-like API, but I've never used them.
I do use LM Studio in server mode, though, and it can expose an OpenAI-compatible endpoint like
http://127.0.0.1:5000/v1
I'm pretty sure it's the same for text-generation-webui.
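For anyone trying this, a quick way to confirm that a local server really speaks the OpenAI protocol is to list its models with the official client. This is just a sketch, assuming LM Studio or text-generation-webui is serving at the URL above:

```python
from openai import OpenAI

# Any non-empty API key is accepted by most local OpenAI-compatible servers.
client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="not-needed")

# GET /v1/models: lists whatever the local server currently has loaded.
for model in client.models.list():
    print(model.id)
```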