Feat: add infera as an inference provider #1860

Merged
merged 4 commits into elizaOS:develop on Jan 5, 2025

Conversation

inferanetwork
Contributor

@inferanetwork inferanetwork commented Jan 5, 2025

Relates to

N/A

Risks

N/A

Background

What does this PR do?

This PR introduces an additional inference provider for Eliza, enabling agents to be powered by models hosted on Infera. Infera is currently free to use, which makes Eliza more accessible to developers.

What kind of change is this?

Feature (non-breaking change which adds new functionality)

Why are we doing this? Any context or related work?

We want to contribute to the Eliza project by giving developers easy access to a versatile inference provider. Infera is currently free for all users, and our goal at Infera is to further lower the barrier for developers building AI agents.

Documentation changes needed?

.env.example has been updated with Infera settings.

Testing

Where should a reviewer start?

Reviewers should focus on eliza/packages/core/src/generations.ts, eliza/packages/core/src/models.ts, eliza/packages/core/src/types.ts, and eliza/agent/src/index.ts.

Detailed testing steps

Option 1:

Add an Infera API key to your .env file

INFERA_API_KEY=        # Obtain one via api.infera.org/docs
INFERA_MODEL=          # Set to: llama3.2:latest
INFERA_SERVER_URL=     # Set to: https://api.infera.org/

SMALL_INFERA_MODEL=    # Recommended: llama3.2:latest
MEDIUM_INFERA_MODEL=   # Recommended: mistral-nemo:latest
LARGE_INFERA_MODEL=    # Recommended: mistral-small:latest

Reviewers can change the modelProvider under eliza/packages/core/defaultCharacter to ModelProviderName.INFERA, then run pnpm build to build and pnpm start to test.

Note: You can mix and match models as long as they are available. To see which models are available, head over to api.infera.org/docs and check the /available_models endpoint.
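The environment settings above can be sketched as a small loader with the recommended defaults. This is an illustrative sketch only; `loadInferaSettings` and `InferaSettings` are hypothetical names, not part of the Eliza codebase.

```typescript
// Hypothetical helper illustrating the .env settings described above.
interface InferaSettings {
  apiKey: string;
  serverUrl: string;
  smallModel: string;
  mediumModel: string;
  largeModel: string;
}

function loadInferaSettings(
  env: Record<string, string | undefined>,
): InferaSettings {
  const apiKey = env.INFERA_API_KEY;
  if (!apiKey) {
    // An API key is required; see api.infera.org/docs to obtain one.
    throw new Error("INFERA_API_KEY is required");
  }
  return {
    apiKey,
    serverUrl: env.INFERA_SERVER_URL ?? "https://api.infera.org/",
    // Fallbacks are the recommended models from the testing steps above.
    smallModel: env.SMALL_INFERA_MODEL ?? "llama3.2:latest",
    mediumModel: env.MEDIUM_INFERA_MODEL ?? "mistral-nemo:latest",
    largeModel: env.LARGE_INFERA_MODEL ?? "mistral-small:latest",
  };
}
```

In a real setup these values would come from `process.env` after loading the .env file.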

Option 2:

Add an Infera API key to your .env file and select the models you would like to use as done in option 1.

Set the modelProvider in eliza/characters/trump.character.json to infera and run pnpm run IntegrationTests.

Note: a chosen model may not be available, but using llama3.2:latest will ensure you get a response from the network, as it is the base model that Infera hosts.
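For Option 2, the character-file change amounts to setting a single field. Only the relevant field is shown here; the rest of the character definition is omitted:

```json
{
  "modelProvider": "infera"
}
```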

Discord username

white0270_

@inferanetwork inferanetwork marked this pull request as ready for review January 5, 2025 06:53
@lalalune lalalune merged commit 492ca47 into elizaOS:develop Jan 5, 2025
3 of 4 checks passed