Note: The llmclient repository is now deprecated. We have migrated all development and maintenance to a new package, lmi, which is part of the ldp repository.
LMI Package: https://github.com/Future-House/ldp/tree/main/lmi
LDP Repository: https://github.com/Future-House/ldp
A Python library for interacting with Large Language Models (LLMs) through a unified interface.
pip install fh-llm-client
A simple example of how to use the library with default settings is shown below.
from llmclient import LiteLLMModel
from aviary import Message
llm = LiteLLMModel()
messages = [
    Message(content="What is the meaning of life?"),
]
completion = await llm.call_single(messages)
assert completion.text == "42"
An LLM is a class that inherits from LLMModel and implements the following methods:
async acompletion(messages: list[Message], **kwargs) -> list[LLMResult]
async acompletion_iter(messages: list[Message], **kwargs) -> AsyncIterator[LLMResult]
These methods are used by the base class LLMModel to implement the LLM interface. Because LLMModel is an abstract class, it doesn't depend on any specific LLM provider. All communication with the provider happens in the subclasses, which use acompletion and acompletion_iter as interfaces.
Because these are the only methods that communicate with the chosen LLM provider, we use the LLMResult abstraction to hold the results of the LLM call.
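Purely as an illustration of this interface, a provider-free subclass might look like the sketch below. It assumes LLMModel and LLMResult are importable from llmclient and that LLMResult accepts a text field; anything beyond the names documented above is an assumption, not the library's actual API.

from collections.abc import AsyncIterator

from aviary import Message
from llmclient import LLMModel, LLMResult  # assumed import locations

class EchoModel(LLMModel):
    """Hypothetical subclass that echoes the last message instead of calling a provider."""

    async def acompletion(self, messages: list[Message], **kwargs) -> list[LLMResult]:
        # Build a single "completion" from the last message.
        return [LLMResult(text=messages[-1].content)]

    async def acompletion_iter(
        self, messages: list[Message], **kwargs
    ) -> AsyncIterator[LLMResult]:
        # Stream the same result as a one-item async iterator.
        yield LLMResult(text=messages[-1].content)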
An LLMModel implements call, which receives a list of aviary.Messages and returns a list of LLMResults. LLMModel.call can receive callbacks, tools, and output schemas to control its behavior, as explained below. Additionally, LLMModel.call_single can be used to return a single LLMResult completion.
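As a quick sketch of the difference (reusing llm and messages from the first example above, and assuming the config requests more than one completion):

results = await llm.call(messages)        # list[LLMResult], one per completion
result = await llm.call_single(messages)  # a single LLMResult
print(result.text)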
LiteLLMModel wraps LiteLLM API usage within our LLMModel interface. It receives a name parameter, which is the name of the model to use, and a config parameter, which is a dictionary of configuration options for the model following the LiteLLM configuration schema. Common parameters such as temperature, max_tokens, and n (the number of completions to return) can be passed as part of the config dictionary.
import os

from llmclient import LiteLLMModel

config = {
    "model_list": [
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "api_key": os.getenv("OPENAI_API_KEY"),
                "frequency_penalty": 1.5,
                "top_p": 0.9,
                "max_tokens": 512,
                "temperature": 0.1,
                "n": 5,
            },
        }
    ]
}

llm = LiteLLMModel(name="gpt-4o", config=config)
config can also be used to pass common parameters directly to the model.
config = {
    "name": "gpt-4o",
    "temperature": 0.1,
    "max_tokens": 512,
    "n": 5,
}

llm = LiteLLMModel(config=config)
Cost tracking is supported in two different ways:
- Calls to the LLM return the token usage for each call in LLMResult.prompt_count and LLMResult.completion_count. Additionally, LLMResult.cost can be used to get a cost estimate for the call in USD.
- A global cost tracker is maintained in GLOBAL_COST_TRACKER and can be enabled or disabled using enable_cost_tracking() and cost_tracking_ctx().
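A minimal sketch of both approaches is shown below; the import path for enable_cost_tracking is an assumption, so adjust it if the helper lives in a submodule such as llmclient.cost_tracker.

from llmclient import LiteLLMModel, enable_cost_tracking  # assumed import path

enable_cost_tracking()  # turn on the global cost tracker

llm = LiteLLMModel(name="gpt-4o")
result = await llm.call_single(messages)

print(result.prompt_count, result.completion_count)  # token usage for this call
print(result.cost)  # estimated cost for this call in USD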
Rate limiting helps control the rate of requests made to various services and LLMs. The rate limiter supports both in-memory and Redis-based storage for cross-process rate limiting.
Rate limits can be configured in two ways:
- Through the LLM configuration:
from llmclient import LiteLLMModel
config = {
    "rate_limit": {
        "gpt-4": "100/minute",  # 100 tokens per minute
    }
}
llm = LiteLLMModel(name="gpt-4", config=config)
- Through the global rate limiter configuration:
from llmclient.rate_limiter import GLOBAL_LIMITER
GLOBAL_LIMITER.rate_config[("client", "gpt-4")] = "100/minute"
Rate limits can be specified in two formats:
- As a string: "<count> [per|/] [n (optional)] <second|minute|hour|day|month|year>"
  "100/minute"    # 100 requests per minute
  "5 per second"  # 5 requests per second
  "1000/day"      # 1000 requests per day
- Using RateLimitItem classes:
  from limits import RateLimitItemPerSecond, RateLimitItemPerMinute
  RateLimitItemPerSecond(30, 1)    # 30 requests per second
  RateLimitItemPerMinute(1000, 1)  # 1000 requests per minute
The rate limiter supports two storage backends:
- In-memory storage (default when Redis is not configured):
from llmclient.rate_limiter import GlobalRateLimiter
limiter = GlobalRateLimiter(use_in_memory=True)
- Redis storage (for cross-process rate limiting):
# Set REDIS_URL environment variable
import os
os.environ["REDIS_URL"] = "localhost:6379"
from llmclient.rate_limiter import GlobalRateLimiter
limiter = GlobalRateLimiter() # Will automatically use Redis if REDIS_URL is set
You can monitor current rate limit status:
from llmclient.rate_limiter import GLOBAL_LIMITER
status = await GLOBAL_LIMITER.rate_limit_status()
# Example output:
{
    ("client", "gpt-4"): {
        "period_start": 1234567890,
        "n_items_in_period": 50,
        "period_seconds": 60,
        "period_name": "minute",
        "period_cap": 100,
    }
}
The default timeout for rate limiting is 60 seconds, but can be configured:
import os
os.environ["RATE_LIMITER_TIMEOUT"] = "30" # 30 seconds timeout
Rate limits can account for different weights (e.g., token counts for LLM requests):
await GLOBAL_LIMITER.try_acquire(
    ("client", "gpt-4"),
    weight=token_count,    # Number of tokens in the request
    acquire_timeout=30.0,  # Optional timeout override
)
This client also includes embedding models. An embedding model is a class that inherits from EmbeddingModel and implements the embed_documents method, which receives a list of strings and returns a list of embeddings (one list of floats per input string).
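Purely to illustrate this interface, a toy subclass might look like the sketch below; it assumes EmbeddingModel is importable from llmclient and behaves like a pydantic model with a name field, both of which are assumptions.

from llmclient import EmbeddingModel  # assumed import location

class CharCountEmbeddingModel(EmbeddingModel):
    """Hypothetical model that embeds each text as a 1-dimensional vector of its length."""

    name: str = "char-count"  # assumes a pydantic-style `name` field

    async def embed_documents(self, texts: list[str]) -> list[list[float]]:
        return [[float(len(text))] for text in texts]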
Currently, the following embedding models are supported:
- LiteLLMEmbeddingModel
- SparseEmbeddingModel
- SentenceTransformerEmbeddingModel
- HybridEmbeddingModel
LiteLLMEmbeddingModel provides a wrapper around LiteLLM's embedding functionality. It supports various embedding models through the LiteLLM interface, with automatic dimension inference and token limit handling. It defaults to text-embedding-3-small and can be configured with name, batch_size, and config parameters.
Notice that LiteLLMEmbeddingModel can also be rate limited.
from llmclient import LiteLLMEmbeddingModel
model = LiteLLMEmbeddingModel()
model = LiteLLMEmbeddingModel(
    name="text-embedding-ada-002",
    batch_size=16,
    config={
        "kwargs": {
            "api_key": "your-api-key",
        },
        "rate_limit": "100/minute",
    },
)
embeddings = await model.embed_documents(["text1", "text2", "text3"])
HybridEmbeddingModel combines multiple embedding models by concatenating their outputs. It is typically used to combine a dense embedding model (like LiteLLMEmbeddingModel) with a sparse embedding model for improved performance. The model can be created in two ways:
from llmclient import LiteLLMEmbeddingModel, SparseEmbeddingModel, HybridEmbeddingModel
dense_model = LiteLLMEmbeddingModel(name="text-embedding-3-small")
sparse_model = SparseEmbeddingModel()
hybrid_model = HybridEmbeddingModel(models=[dense_model, sparse_model])
The resulting embedding dimension will be the sum of the dimensions of all component models. For example, if you combine a 1536-dimensional dense embedding with a 256-dimensional sparse embedding, the final embedding will be 1792-dimensional.
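As a quick sanity check of that arithmetic (assuming the default component models match the example dimensions above):

# Assumes the component dimensions match the 1536 + 256 example above.
embeddings = await hybrid_model.embed_documents(["some text"])
assert len(embeddings[0]) == 1536 + 256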
You can also use sentence-transformers, a local embedding library with support for HuggingFace models, by installing fh-llm-client[local].
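A usage sketch, assuming SentenceTransformerEmbeddingModel accepts a name parameter like the other embedding models (the model name below is only an example):

from llmclient import SentenceTransformerEmbeddingModel  # requires fh-llm-client[local]

model = SentenceTransformerEmbeddingModel(
    name="sentence-transformers/all-MiniLM-L6-v2"  # example model; parameter name is an assumption
)
embeddings = await model.embed_documents(["text1", "text2"])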