Relates to: #69
Risks
Low risk: the Ollama provider itself.
Medium risk: how the ollama-ai-provider dependency was added.
High risk: llama.ts delegating to LlamaCppService or OllamaService (it was crashing nodemon, so this part is still WIP, but it is a change worth reviewing).
Background
Adds an Ollama model provider.
Also adds code for switching local llama providers (useful for local embedding use cases, and for running local models for some tasks while still using other model providers for the rest).
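The provider switch described above could look roughly like the sketch below. This is a hypothetical illustration, not the PR's actual code: the service classes are stubs, and the `LOCAL_LLAMA_PROVIDER` env variable name and the `selectLocalProvider` helper are assumptions.

```typescript
// Hypothetical sketch of llama.ts delegating to one of two local services.
// Class and variable names are illustrative assumptions.
interface TextGenService {
  generate(prompt: string): Promise<string>;
}

// Stub standing in for a llama.cpp-backed service.
class LlamaCppService implements TextGenService {
  async generate(prompt: string): Promise<string> {
    return `[llama.cpp] ${prompt}`;
  }
}

// Stub standing in for an Ollama-backed service.
class OllamaService implements TextGenService {
  async generate(prompt: string): Promise<string> {
    return `[ollama] ${prompt}`;
  }
}

// Pick the local provider from an env switch, defaulting to llama.cpp.
function selectLocalProvider(name: string | undefined): TextGenService {
  return name === "ollama" ? new OllamaService() : new LlamaCppService();
}

// e.g. const service = selectLocalProvider(process.env.LOCAL_LLAMA_PROVIDER);
```

Keeping the switch behind a single factory like this means the rest of llama.ts only ever sees the `TextGenService` interface, regardless of which backend is active.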
Documentation changes needed?
All the documentation needed is in the .env file. I did not update the README, but I can if needed.
Testing
Run with OLLAMA_MODEL and OLLAMA_EMBEDDING_MODEL both set in the environment.
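For example, the .env entries could look like the following. The model names here are just examples of models available in Ollama, not values the PR prescribes:

```shell
# Example .env values for local testing (model names are illustrative)
OLLAMA_MODEL=llama3.1
OLLAMA_EMBEDDING_MODEL=mxbai-embed-large
```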