(feat) add IBM watsonx.ai as an llm provider #3270
Conversation
Issue: #3204
Hey @h0rv, I just checked your PR in instructor and can confirm that this PR would add the support you were looking for. The only thing is that since watsonx.ai doesn't support function-calling natively, you would need to have the schema added to the prompt by setting `mode=Mode.JSON`:

```python
import os

import instructor
import litellm
from instructor import Mode
from litellm import completion
from pydantic import BaseModel

# watsonx.ai doesn't support `json_mode`, so drop unsupported params
litellm.drop_params = True

os.environ["WATSONX_URL"] = ""
os.environ["WATSONX_APIKEY"] = ""


class User(BaseModel):
    name: str
    age: int


client = instructor.from_litellm(
    completion, project_id="<your-project-id>", mode=Mode.JSON
)

resp = client.chat.completions.create(
    model="watsonx/meta-llama/llama-3-8b-instruct",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```
cc: @krrishdholakia
Awesome, thanks, and good work on this!
Great PR! Thank you for the work on this @simonsanvil
Just merged. I'll take care of any issues that come up in CI/CD.
This failed a bunch of linting tests.
Integrated IBM's watsonx.ai API as a provider, enabling calls to the text generation and embedding models available on the watsonx.ai platform.
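For context, here is a minimal sketch of what the integration enables through litellm's standard `completion` and `embedding` interfaces. The model ids and the `project_id` placeholder are illustrative, not taken from this PR; check the watsonx.ai catalog for the exact names available to you:

```python
import os

from litellm import completion, embedding

# Credentials for your watsonx.ai instance
os.environ["WATSONX_URL"] = ""     # your region's watsonx.ai endpoint
os.environ["WATSONX_APIKEY"] = ""

# Text generation via a watsonx.ai-hosted model (prefixed with "watsonx/")
resp = completion(
    model="watsonx/meta-llama/llama-3-8b-instruct",
    messages=[{"role": "user", "content": "Say hello."}],
    project_id="<your-project-id>",  # illustrative placeholder
)
print(resp.choices[0].message.content)

# Embeddings via an embedding model hosted on the platform
emb = embedding(
    model="watsonx/ibm/slate-30m-english-rtrvr",  # example embedding model id
    input=["Hello world"],
    project_id="<your-project-id>",
)
print(len(emb.data[0]["embedding"]))
```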