🚀 Describe the new functionality needed
We currently have a workaround to support non-Llama models through the remote vLLM provider (sketched below), but it would be great to support this officially.
For the inline vLLM provider, this is a work in progress: #880
Let's use this issue to discuss any proposals and technical considerations.
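To make the current workaround concrete, here is a rough sketch of serving a non-Llama model with vLLM, pointing the remote vLLM provider at it, and querying it through the Python client. The model name, ports, and client method shown are illustrative assumptions and may differ across Llama Stack versions; this is not an official example.

```python
# Rough sketch of the current workaround (illustrative only; model name, ports,
# and client method signatures are assumptions and may vary by version).
#
# 1. Serve a non-Llama model with vLLM, e.g.:
#      vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
# 2. Run a Llama Stack distribution whose inference provider is the remote vLLM
#    provider, with its `url` pointed at http://localhost:8000/v1.
# 3. Query the model through the stack:

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # stack endpoint (assumed port)

response = client.inference.chat_completion(
    model_id="Qwen/Qwen2.5-7B-Instruct",  # hypothetical non-Llama model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response)
```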
💡 Why is this needed? What if we don't build it?
TBA
Other thoughts
No response
Somehow GitHub created a duplicate issue, so closing this one: #965