Add ollama example to README #56
Changes from 2 commits
@@ -48,7 +48,7 @@ The skill utilizes the `~/.config/mycroft/skills/skill-ovos-fallback-chatgpt.ope
 {
   "key": "sk-XXXYYYZZZAAABBB123",
   "model": "gpt-3.5-turbo",
-  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor",
+  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",
   "enable_memory": true,
   "memory_size": 15,
   "__mycroft_skill_firstrun": false
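The settings hunk above is plain JSON, so it can be generated or sanity-checked programmatically. A minimal sketch (the temporary path is illustrative only; the skill actually reads its settings file from `~/.config/mycroft/skills/`, with the exact filename per the README):

```python
import json
import os
import tempfile

# Example settings from the README (the "key" value is a placeholder, not a real token).
settings = {
    "key": "sk-XXXYYYZZZAAABBB123",
    "model": "gpt-3.5-turbo",
    "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",
    "enable_memory": True,
    "memory_size": 15,
    "__mycroft_skill_firstrun": False,
}

# Write to a temporary file here; in practice this would be the skill's
# settings file under ~/.config/mycroft/skills/.
path = os.path.join(tempfile.mkdtemp(), "settings.json")
with open(path, "w") as f:
    json.dump(settings, f, indent=2)

# Round-trip to confirm the file is valid JSON with the expected values.
with open(path) as f:
    loaded = json.load(f)
print(loaded["model"])  # gpt-3.5-turbo
```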
@@ -57,11 +57,18 @@ The skill utilizes the `~/.config/mycroft/skills/skill-ovos-fallback-chatgpt.ope
 ### Configuration for use with Local AI
 
 You can use a local AI to power this skill instead of ChatGPT. See some of the options in the "See also" section below.
+
+Hint: If you're running an Ollama server, you can use `http://localhost:11434/v1` as the `api_url`.
+
+Hint: The skill currently requires the `key` parameter to be present; put any string into this parameter for local AI.
+
 ```json
 {
+  "api_url": "https://llama.smartgic.io/v1",
   "key": "sk-xxx",
-  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor",
+  "model": "llama3:latest",
+  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",
Comment on lines +66 to +67

💡 Codebase verification: clarify the model specification for different local AI setups. Based on the codebase analysis, the example in the README needs to be updated: the skill's default model is "gpt-3.5-turbo" (as shown in `DEFAULT_SETTINGS`), and the README should give clearer guidance about model names. For Ollama, use model names like "llama2" or "mistral"; for other local AI servers, refer to their documentation for available models.

Suggested update:

```diff
-  "model": "llama3:latest",
+  "model": "llama2",
```
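The review suggestion above hinges on which model names a given server actually serves. With Ollama, the locally available models can be listed via `GET /api/tags`; the helper below is a hypothetical sketch (not part of the skill) that checks a configured `model` value against that response, using a stubbed response so it runs offline:

```python
def model_available(configured: str, tags_response: dict) -> bool:
    """Check a configured model name against an Ollama /api/tags response.

    Ollama reports models as e.g. {"models": [{"name": "llama3:latest"}, ...]}.
    """
    names = {m["name"] for m in tags_response.get("models", [])}
    # Accept both "llama3" and "llama3:latest" style names.
    return configured in names or f"{configured}:latest" in names

# Offline example with a stubbed response; against a live server you would
# fetch it with: json.load(urllib.request.urlopen("http://localhost:11434/api/tags"))
stub = {"models": [{"name": "llama3:latest"}, {"name": "mistral:latest"}]}
print(model_available("llama3:latest", stub))  # True
print(model_available("llama3", stub))         # True
print(model_available("llama2", stub))         # False
```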
   "enable_memory": true,
   "memory_size": 15,
   "name": "A.I.",

@@ -70,6 +77,12 @@ The skill utilizes the `~/.config/mycroft/skills/skill-ovos-fallback-chatgpt.ope
 }
 ```
|
||
## See also | ||
|
||
- [OVOS OpenAI Persona Plugin](https://github.com/OpenVoiceOS/ovos-solver-openai-persona-plugin) - The underlying plugin that powers this skill's integration with various AI models. | ||
- [Ollama OpenAI Compatibility](https://ollama.com/blog/openai-compatibility). | ||
- [Other compatible servers in the OVOS docs](https://openvoiceos.github.io/ovos-technical-manual/202-persona_server/#compatible-projects) | ||
|
||
## Examples | ||
|
||
- "Explain quantum computing in simple terms" | ||
|
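Since the servers referenced above expose an OpenAI-compatible API, the Local AI configuration translates directly into a standard chat-completions request. A minimal sketch, assuming a local Ollama endpoint as in the hint; the request is only constructed here, not sent, since sending presumes a running server:

```python
import json
from urllib.request import Request

# Values taken from the README's local-AI example; any string works as the key.
config = {
    "api_url": "http://localhost:11434/v1",
    "key": "sk-xxx",
    "model": "llama3:latest",
    "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",
}

# Standard OpenAI chat-completions payload: persona as the system message,
# the user's utterance as the user message.
payload = {
    "model": config["model"],
    "messages": [
        {"role": "system", "content": config["persona"]},
        {"role": "user", "content": "Explain quantum computing in simple terms"},
    ],
}

req = Request(
    config["api_url"].rstrip("/") + "/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {config['key']}",
    },
)
print(req.full_url)  # http://localhost:11434/v1/chat/completions
# To actually send it: urllib.request.urlopen(req) against a running server.
```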
Comment: llama.cpp doesn't use this port.