Feat: add infera as an inference provider #1860
Merged
Relates to
N/A
Risks
N/A
Background
What does this PR do?
This PR introduces an additional inference provider for Eliza, enabling agents to be powered by models hosted on Infera. Infera is currently free to use, which makes Eliza more accessible to developers.
What kind of change is this?
Feature (non-breaking change which adds new functionality)
Why are we doing this? Any context or related work?
We want to contribute to the Eliza project in a way that allows developers to easily use a versatile inference provider. Infera is currently free for all users, and our goal at Infera is to further lower the barrier for developers building AI agents.
Documentation changes needed?
.env.example has been updated with Infera settings.
Testing
Where should a reviewer start?
The reviewer should focus on eliza/packages/core/src/generations.ts, eliza/packages/core/src/models.ts, eliza/packages/core/src/types.ts, and eliza/agent/src/index.ts.
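For orientation, adding a provider typically touches those files in a predictable way: a new enum member in types.ts, an endpoint/model entry in models.ts, and a dispatch branch in generations.ts. The sketch below is illustrative only; the names mirror Eliza's conventions but are not taken from this diff:

```typescript
// Illustrative sketch of where a new provider slots in (not the actual diff).

// types.ts: the provider enum gains a new member.
enum ModelProviderName {
  OPENAI = "openai",
  ANTHROPIC = "anthropic",
  INFERA = "infera", // new entry added by this PR
}

// models.ts: each provider maps to an endpoint and default model.
// This config shape is an assumption for illustration.
interface ProviderConfig {
  endpoint: string;
  defaultModel: string;
}

const models: Record<string, ProviderConfig> = {
  [ModelProviderName.INFERA]: {
    endpoint: "https://api.infera.org",
    defaultModel: "llama3.2:latest", // base model per the PR notes
  },
};

console.log(models[ModelProviderName.INFERA].defaultModel); // prints "llama3.2:latest"
```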
Detailed testing steps
Option 1:
Add an Infera API key to your .env file.
Change the modelProvider in eliza/packages/core/defaultCharacter to ModelProviderName.INFERA.
Run pnpm build to build, then pnpm start to test.
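The Option 1 setup can be sketched as follows; the INFERA_API_KEY variable name, the enum shape, and the character object are assumptions for illustration, not taken from the diff:

```typescript
// Sketch of the Option 1 setup (names are illustrative, not the actual diff).
//
// 1. .env — the API key variable (name assumed here):
//      INFERA_API_KEY=your-key-here
//
// 2. defaultCharacter — switch the provider. A minimal stand-in for the
//    character object, assuming Eliza's string-enum convention:

enum ModelProviderName {
  INFERA = "infera",
}

const defaultCharacter = {
  name: "Eliza",
  modelProvider: ModelProviderName.INFERA, // previously a different provider
};

// 3. Then: pnpm build && pnpm start
console.log(defaultCharacter.modelProvider); // prints "infera"
```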
Note: You can mix and match which models you use, as long as they are available. To check which models are available, head over to api.infera.org/docs and look at the /available_models endpoint.
Option 2:
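That availability check can be sketched as below. The /available_models path comes from the text above; the assumption that it returns a plain array of model-name strings is mine, so treat the response handling as a sketch:

```typescript
// Sketch: verify a desired model is offered before configuring it.
// Assumes /available_models returns a plain array of model name strings.

function isModelAvailable(available: string[], wanted: string): boolean {
  return available.includes(wanted);
}

// At runtime you might fetch the list (untested network sketch):
//   const available: string[] =
//     await (await fetch("https://api.infera.org/available_models")).json();

// Check against a hypothetical sample response:
const sample = ["llama3.2:latest", "mistral:7b"];
console.log(isModelAvailable(sample, "llama3.2:latest")); // prints true
```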
Add an Infera API key to your .env file and select the models you would like to use, as in Option 1.
Set the model provider in eliza/characters/trump.character.json to infera.
Run pnpm run IntegrationTests.
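The character-file change for Option 2 might look like the fragment below; only the modelProvider value is from this PR's instructions, and the other fields of trump.character.json are omitted:

```json
{
  "name": "trump",
  "modelProvider": "infera"
}
```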
Note: a chosen model may not be available, but using llama3.2:latest will ensure that you get a response from the network, as it is the base model that Infera hosts.
Discord username
white0270_