This repository has been archived by the owner on Jan 31, 2025. It is now read-only.

# Add ollama example to README #56

Merged · 4 commits · Jan 5, 2025
17 changes: 15 additions & 2 deletions README.md
@@ -48,7 +48,7 @@ The skill utilizes the `~/.config/mycroft/skills/skill-ovos-fallback-chatgpt.ope
 {
   "key": "sk-XXXYYYZZZAAABBB123",
   "model": "gpt-3.5-turbo",
-  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor",
+  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",
   "enable_memory": true,
   "memory_size": 15,
   "__mycroft_skill_firstrun": false
@@ -57,11 +57,18 @@ The skill utilizes the `~/.config/mycroft/skills/skill-ovos-fallback-chatgpt.ope

 ### Configuration for use with Local AI
 
+You can use a local AI to power this skill instead of ChatGPT. See some of the options in the "See also" section below.
+
+Hint: If you're running an Ollama server, you can use `http://localhost:11434/v1` as the `api_url`.
> **Member:** llama.cpp doesn't use this port (`11434` is Ollama's default).

+Hint: the skill currently requires the `key` param to be present; put any string into this parameter for local AI.

```json
{
   "api_url": "https://llama.smartgic.io/v1",
   "key": "sk-xxx",
-  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor",
+  "model": "llama3:latest",
+  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",
Comment on lines +66 to +67
💡 Codebase verification

Model configuration needs to be updated for clarity and accuracy

Based on the codebase analysis, I can confirm that the example in the README needs to be updated. The default model is actually "gpt-3.5-turbo" as shown in the DEFAULT_SETTINGS, and the README should provide clearer guidance about model names for different setups:

  • The OpenAI section correctly shows "gpt-3.5-turbo"
  • The Local AI section shows "llama3:latest" which is not a standard model name
  • The README already mentions Ollama compatibility but doesn't align the example with the hint

Suggested update:

```diff
-  "model": "llama3:latest",
+  "model": "llama2", // For Ollama use model names like "llama2" or "mistral". For other local AI servers, refer to their documentation for available models
```
🔗 Analysis chain

Clarify model specification for different local AI setups.

The example shows llama3:latest as the model, but this might not be the correct model name for all local AI setups. Consider adding a note about model name variations or providing multiple examples.

For instance:

  • Ollama uses names like llama2 or mistral
  • Other local AI servers might use different model identifiers

Consider updating the example to clarify this:

```diff
-  "model": "llama3:latest",
+  "model": "llama2", // For ollama, use model names like "llama2" or "mistral"
```

Let me search for more context about model specifications in the repository.


Let me check the specific content of the README.md file to understand the context better.


Let me check if there's any additional documentation about model configuration in the linked OVOS technical manual or related files.


Let me check the default settings to understand the model configuration better.


Let me try one more search with corrected syntax.

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

```bash
#!/bin/bash
# Search for model configuration examples in the codebase
rg -l "model.*:.*latest" --type md

# Search for any model configuration examples
rg -l "\"model\":" --type md

# Check for documentation about model configuration
fd -e md -x cat {} | grep -A 5 -B 5 "model.*config"
```

Length of output: 141


Script:

```bash
#!/bin/bash
# Get the content around model configuration in README.md
cat README.md | grep -A 10 -B 10 '"model":'

# Search for any documentation about supported models
rg -i "supported.*models?" --type md
rg -i "available.*models?" --type md
```

Length of output: 1512


Script:

```bash
#!/bin/bash
# Search for any documentation files about persona server or model configuration
fd -e md -x grep -l -i "persona.*server" {}

# Search for any configuration examples with different model names
rg "model.*:.*\".*\"" --type json
rg "model.*=.*" --type py
```

Length of output: 610


Script:

```bash
#!/bin/bash
# Search for DEFAULT_SETTINGS in Python files
rg "DEFAULT_SETTINGS.*=.*{" -A 10 --type py

# Look for any model-related constants or configurations
rg "model.*=.*" --type py --glob "**/__init__.py"
rg "model.*=.*" --type py --glob "**/constants.py"
```

Length of output: 407


Script:

```bash
#!/bin/bash
# Search for DEFAULT_SETTINGS with fixed regex
rg "DEFAULT_SETTINGS" -B 2 -A 8 --type py

# Look for the main skill file that might contain default settings
fd -e py -x cat {} | grep -A 10 "DEFAULT_SETTINGS"
```

Length of output: 5934

   "enable_memory": true,
   "memory_size": 15,
   "name": "A.I.",
@@ -70,6 +77,12 @@ The skill utilizes the `~/.config/mycroft/skills/skill-ovos-fallback-chatgpt.ope
}
```
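The review above notes that the skill ships a `DEFAULT_SETTINGS` dict whose default model is `gpt-3.5-turbo`. As a rough illustration of how a user settings file like the JSON above would interact with those defaults (not the skill's actual code; the function name and merge logic here are hypothetical), consider:

```python
import json

# Hypothetical defaults, mirroring what the review says DEFAULT_SETTINGS contains;
# the real skill defines these in its Python source.
DEFAULT_SETTINGS = {
    "api_url": None,  # None means "use the official OpenAI endpoint"
    "model": "gpt-3.5-turbo",
    "enable_memory": True,
    "memory_size": 15,
}

def load_settings(raw_json: str) -> dict:
    """Overlay user-supplied settings on top of the defaults."""
    settings = dict(DEFAULT_SETTINGS)
    settings.update(json.loads(raw_json))
    # Per the hint above: the skill currently requires `key` even for local AI.
    if "key" not in settings:
        raise ValueError("'key' must be present; any string works for local AI")
    return settings

local_ai = load_settings('{"api_url": "http://localhost:11434/v1", "key": "sk-xxx"}')
print(local_ai["model"])  # "gpt-3.5-turbo", inherited from the defaults
```

Note that omitting `"model"`, as in this call, silently falls back to `gpt-3.5-turbo`, which a local server may not recognize; this is why the review argues the `model` line in the example should be explicit about valid names.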

+## See also
+
+- [OVOS OpenAI Persona Plugin](https://github.com/OpenVoiceOS/ovos-solver-openai-persona-plugin) - the underlying plugin that powers this skill's integration with various AI models
+- [Ollama OpenAI Compatibility](https://ollama.com/blog/openai-compatibility)
+- [Other compatible servers in the OVOS docs](https://openvoiceos.github.io/ovos-technical-manual/202-persona_server/#compatible-projects)
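
What makes the servers linked above interchangeable is the shared OpenAI-style request shape: given `api_url`, `model`, and a chat payload, any of them can serve this skill. The sketch below builds, but does not send, such a request; `build_chat_request` is a hypothetical helper, not part of the skill (the persona plugin handles this in practice), and in a real call the `key` would go into an `Authorization: Bearer` header.

```python
import json

def build_chat_request(settings: dict, user_utterance: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for an OpenAI-compatible /chat/completions call."""
    url = settings["api_url"].rstrip("/") + "/chat/completions"
    body = {
        "model": settings["model"],
        "messages": [
            {"role": "system", "content": settings["persona"]},
            {"role": "user", "content": user_utterance},
        ],
    }
    return url, json.dumps(body).encode("utf-8")

settings = {
    "api_url": "http://localhost:11434/v1",  # an Ollama server, per the hint above
    "key": "sk-xxx",
    "model": "llama3:latest",
    "persona": "You are a helpful voice assistant.",
}
url, body = build_chat_request(settings, "Explain quantum computing in simple terms")
print(url)  # http://localhost:11434/v1/chat/completions
```

Swapping `api_url` and `model` is all it takes to move between OpenAI, Ollama, or any server listed in the OVOS docs.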

## Examples

- "Explain quantum computing in simple terms"