Actionable comments posted: 0
🧹 Nitpick comments (4)
README.md (4)
Line range hint 58-72: Consider clarifying local vs remote llama.cpp setup

The example uses a specific remote URL (`llama.smartgic.io`). Consider adding a note about how to configure for a locally running llama.cpp instance, as this would be more common for self-hosted setups.

 ### Configuration for use with llama.cpp
+
+For a local llama.cpp instance, use your local server URL (e.g., `http://localhost:8080/v1`).
+For a remote instance like the example below, replace the URL accordingly:
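For the local setup this suggestion describes, a minimal sketch of bringing up such an endpoint might look like the following. This is illustrative and not part of the PR: it assumes a recent llama.cpp build that ships the `llama-server` binary, and the model path, host, and port are placeholder assumptions; flags and defaults vary between llama.cpp versions.

```bash
# Minimal sketch: start a local llama.cpp server that exposes an
# OpenAI-compatible API, then check that it answers.
# The model path, host, and port below are illustrative assumptions.
llama-server -m ./models/model.gguf --host 127.0.0.1 --port 8080 &

# With the server up, the skill's api_url would point at http://localhost:8080/v1
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local", "messages": [{"role": "user", "content": "Hello"}]}'
```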
75-75: Format the URL as a markdown link

Convert the bare URL to a proper markdown link for better readability.

-See https://ollama.com/blog/openai-compatibility
+See [Ollama OpenAI Compatibility](https://ollama.com/blog/openai-compatibility)

🧰 Tools: 🪛 Markdownlint (0.37.0): 75-75: Bare URL used (MD034, no-bare-urls)
73-88: Add a brief introduction to ollama

Consider adding a short description of what ollama is and its benefits before jumping into the configuration. This would help users understand why they might want to use it.

 ### Configuration for use with a locally-running instance of ollama
+[Ollama](https://ollama.ai/) lets you run large language models locally. It's a great option for users who want to maintain privacy or run AI models without relying on cloud services.
+
 See [Ollama OpenAI Compatibility](https://ollama.com/blog/openai-compatibility)

🧰 Tools: 🪛 LanguageTool: ~73-~73: Although a hyphen is possible, it is not necessary in a compound modifier in which the first word is an adverb that ends in 'ly'. (HYPHENATED_LY_ADVERB_ADJECTIVE)
🪛 Markdownlint (0.37.0): 75-75: Bare URL used (MD034, no-bare-urls)
90-92: Enhance the See also section

The section could be more informative with proper markdown formatting and a brief description of the linked project.

 ## See also
-https://github.com/OpenVoiceOS/ovos-solver-openai-persona-plugin/tree/dev
+- [OVOS OpenAI Persona Plugin](https://github.com/OpenVoiceOS/ovos-solver-openai-persona-plugin/tree/dev) - The underlying plugin that powers this skill's integration with various AI models.

🧰 Tools: 🪛 Markdownlint (0.37.0): 92-92: Bare URL used (MD034, no-bare-urls)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1): README.md (2 hunks)
🧰 Additional context used
🪛 LanguageTool
README.md
[uncategorized] ~73-~73: Although a hyphen is possible, it is not necessary in a compound modifier in which the first word is an adverb that ends in ‘ly’.
Context: ...} ``` ### Configuration for use with a locally-running instance of ollama See https://ollama....
(HYPHENATED_LY_ADVERB_ADJECTIVE)
🪛 Markdownlint (0.37.0)
README.md
75-75: Bare URL used (MD034, no-bare-urls)
92-92: Bare URL used (MD034, no-bare-urls)
🔇 Additional comments (1)
README.md (1)
Line range hint 58-93: Overall documentation improvements look great!

The additions provide clear guidance for users wanting to use ollama, and the reorganization of the llama.cpp section makes the different configuration options more distinct. The examples are complete and include all necessary configuration parameters.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~73-~73: Although a hyphen is possible, it is not necessary in a compound modifier in which the first word is an adverb that ends in ‘ly’.
Context: ...} ``` ### Configuration for use with a locally-running instance of ollama See https://ollama....(HYPHENATED_LY_ADVERB_ADJECTIVE)
🪛 Markdownlint (0.37.0)
75-75: Bare URL used (MD034, no-bare-urls)
92-92: Bare URL used (MD034, no-bare-urls)
The existing example is for any OpenAI-compatible API, not specifically llama.cpp. The ollama example you added is the exact same thing, just with the URL changed? I am OK with improving the clarity of the existing example, but I don't think adding another one the way this PR does helps much with clarity. There is a non-exhaustive list of compatible projects here: https://openvoiceos.github.io/ovos-technical-manual/202-persona_server/#compatible-projects
OK, thanks for the feedback. I will merge the two back in and add some notes at the top about how to set it up with ollama.
By the way - does the existing example in the README work, since it will likely default to a gpt model that won't exist on a local llama instance?
I just spent a while trying to understand how to use a locally-running ollama instance, so adding some documentation to the README that would have saved me some time.
Caution: Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
Actionable comments posted: 1
🧹 Nitpick comments (1)
README.md (1)
58-58: Consider using a more generic section title

Based on the discussion in the PR, this example works with any OpenAI-compatible API. Consider using a more inclusive title like "Configuration for use with Local OpenAI-compatible APIs (e.g., llama.cpp)" to avoid confusion.
🛑 Comments failed to post (1)
README.md (1)
73-89: 🛠️ Refactor suggestion

Consider consolidating the configuration examples

As discussed in the PR, since both llama.cpp and ollama use OpenAI-compatible APIs, consider merging these examples to reduce redundancy. You could:

- Create a single "Local AI Configuration" section
- Add setup instructions at the top
- Use a table or list to show the different `api_url` and `model` values for each implementation

Here's a suggested structure:
### Configuration for Local AI (OpenAI-compatible APIs)

You can use this skill with various local AI implementations that provide OpenAI-compatible APIs:

#### Setup Instructions

- For ollama: Install and run ollama locally (see [Ollama OpenAI Compatibility](https://ollama.com/blog/openai-compatibility))
- For llama.cpp: Use a hosted instance or set up locally

#### Configuration Example

```json
{
  "api_url": "http://localhost:11434/v1", # For ollama
  # OR "api_url": "https://llama.smartgic.io/v1", # For llama.cpp
  "model": "llama3:latest", # For ollama
  # OR "model": "gpt-3.5-turbo", # For other implementations
  "key": "sk-xxx",
  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",
  "enable_memory": true,
  "memory_size": 15,
  "name": "AI Assistant",
  "confirmation": false
}
```

🧰 Tools: 🪛 LanguageTool: ~73-~73: Although a hyphen is possible, it is not necessary in a compound modifier in which the first word is an adverb that ends in 'ly'. (HYPHENATED_LY_ADVERB_ADJECTIVE)
Actionable comments posted: 0
🧹 Nitpick comments (2)
README.md (2)
60-63: Consider making the Local AI section more generic.

While the ollama-specific information is helpful, consider restructuring this section to be more inclusive of different local AI options. This aligns with JarbasAl's feedback about the OpenAI-compatible API approach.

Consider this revision:

-You can use a llama-based local AI to power this skill. Make sure you update the "model" param and put any arbitrary key into the key param.
-
-If you're running a local ollama or llama.cpp instance, you can use `http://localhost:11434/v1` as the api_url.
+This skill supports any OpenAI-compatible API server. Common options include:
+- Local instances (e.g., ollama at `http://localhost:11434/v1`)
+- Self-hosted servers (e.g., llama.cpp)
+- Compatible cloud services
+
+Make sure to update the "model" parameter according to your server's available models. The "key" parameter is required but can be any arbitrary value for local instances.
68-69: Make the model example more generic or provide multiple options.

The current model name "llama3:latest" is ollama-specific. Consider providing a more generic example or multiple examples to cover different local AI implementations.

Consider this addition:

- "model": "llama3:latest",
+ "model": "llama2", // Generic example
+ // Common model names:
+ // - ollama: "llama2:latest"
+ // - llama.cpp: "llama-2-7b-chat.gguf"
+ // - Other OpenAI-compatible servers may use different model names
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1): README.md (3 hunks)
🔇 Additional comments (2)
README.md (2)
51-51: Great addition of response length constraint!

Adding "respond in 40 words or fewer" to the persona is excellent for voice interactions, as it encourages concise and user-friendly responses.

78-83: Excellent addition of the "See also" section!

The links provide valuable resources for users to learn more about the underlying plugin, Ollama compatibility, and other server options. This helps users make informed decisions about their setup.
Makes sense. In my latest revision I've explicitly added a model param to the local AI example so others don't make my mistake :) since the solver defaults to a gpt model that won't exist on llama instances.
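One quick way to avoid that mistake is to ask the local server which model names it actually serves before filling in the "model" field. The sketch below is illustrative and not part of the PR: it assumes the server exposes the standard OpenAI-compatible `/v1/models` listing and that it is reachable on ollama's usual port.

```bash
# Sketch: list the models an OpenAI-compatible local server exposes, so the
# "model" value in settings.json can be set to a name that really exists.
# Assumes ollama's default port 11434; adjust the URL for other servers.
curl -s http://localhost:11434/v1/models | python3 -m json.tool
```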
Please remove the
README.md (Outdated)

@@ -57,11 +57,18 @@ The skill utilizes the `~/.config/mycroft/skills/skill-ovos-fallback-chatgpt.ope

### Configuration for use with Local AI

You can use a local AI to power this skill instead of Chat GPT. See some of the options in the "See also" section below.

Hint: If you're running a llama server, you can use `http://localhost:11434/v1` as the api_url.
llama.cpp doesn't use this port.

Oh, so let's keep your change then. 👍
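For anyone unsure which backend is actually listening locally, a small check like the one below can confirm the port before editing settings.json. The defaults assumed here (11434 for ollama, 8080 for llama.cpp's server) are typical but configurable, so treat this as a sketch rather than a guarantee.

```bash
# Sketch: probe common default ports to see which local server responds.
# 11434 is ollama's usual default; 8080 is a common llama.cpp server default.
for port in 11434 8080; do
  if curl -s --max-time 2 -o /dev/null "http://localhost:${port}/v1/models"; then
    echo "Something answered on port ${port}"
  else
    echo "No response on port ${port}"
  fi
done
```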
Actionable comments posted: 1
🧹 Nitpick comments (2)
README.md (2)
58-61: Consider enhancing the ollama setup hint.

The explanation about local AI setup is clear and helpful. However, to make it even more user-friendly, consider adding a note about model availability when using ollama.

 Hint: If you're running an ollama server, you can use `http://localhost:11434/v1` as the api_url.
+Note: Make sure you have pulled your desired model first using `ollama pull <model_name>`.
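If following that hint with ollama, the model named in settings.json needs to be pulled before the skill can use it. A minimal sketch, assuming ollama is installed and running, and using `llama3` purely as an example name:

```bash
# Sketch: pull the example model and confirm it shows up locally.
# Substitute whatever model name you plan to put in settings.json.
ollama pull llama3
ollama list   # the NAME column should now include llama3:latest
```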
Line range hint 58-81: Consider restructuring the Local AI documentation.

The current changes significantly improve the documentation, but the information about local AI setup is spread across multiple sections. Consider restructuring to:

- Create a dedicated "Local AI Setup" section that:
  - Lists supported local AI servers (ollama, llama.cpp, etc.)
  - Provides setup instructions for each
  - Documents model naming conventions
- Move the configuration example after these instructions

This would provide a more guided experience for users setting up local AI instances.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1): README.md (3 hunks)
🔇 Additional comments (2)
README.md (2)
49-49: LGTM! Persona description aligns with defaults.

The updated persona description now matches the default configuration mentioned earlier in the README, maintaining consistency throughout the documentation.

76-81: LGTM! Helpful addition of related resources.

The "See also" section is a valuable addition that provides users with relevant links to learn more about the integration and compatibility options.

"model": "llama3:latest",
"persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",
💡 Codebase verification
Model configuration needs to be updated for clarity and accuracy
Based on the codebase analysis, I can confirm that the example in the README needs to be updated. The default model is actually "gpt-3.5-turbo" as shown in the DEFAULT_SETTINGS, and the README should provide clearer guidance about model names for different setups:
- The OpenAI section correctly shows "gpt-3.5-turbo"
- The Local AI section shows "llama3:latest" which is not a standard model name
- The README already mentions Ollama compatibility but doesn't align the example with the hint
Suggested update:
- "model": "llama3:latest",
+ "model": "llama2", // For Ollama use model names like "llama2" or "mistral". For other local AI servers, refer to their documentation for available models
🔗 Analysis chain
Clarify model specification for different local AI setups.
The example shows `llama3:latest` as the model, but this might not be the correct model name for all local AI setups. Consider adding a note about model name variations or providing multiple examples.

For instance:

- Ollama uses names like `llama2` or `mistral`
- Other local AI servers might use different model identifiers
Consider updating the example to clarify this:
- "model": "llama3:latest",
+ "model": "llama2", // For ollama, use model names like "llama2" or "mistral"
Let me search for more context about model specifications in the repository.
Let me check the specific content of the README.md file to understand the context better.
Let me check if there's any additional documentation about model configuration in the linked OVOS technical manual or related files.
Let me check the default settings to understand the model configuration better.
Let me try one more search with corrected syntax.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:
#!/bin/bash
# Search for model configuration examples in the codebase
rg -l "model.*:.*latest" --type md
# Search for any model configuration examples
rg -l "\"model\":" --type md
# Check for documentation about model configuration
fd -e md -x cat {} | grep -A 5 -B 5 "model.*config"
Length of output: 141
Script:
#!/bin/bash
# Get the content around model configuration in README.md
cat README.md | grep -A 10 -B 10 '"model":'
# Search for any documentation about supported models
rg -i "supported.*models?" --type md
rg -i "available.*models?" --type md
Length of output: 1512
Script:
#!/bin/bash
# Search for any documentation files about persona server or model configuration
fd -e md -x grep -l -i "persona.*server" {}
# Search for any configuration examples with different model names
rg "model.*:.*\".*\"" --type json
rg "model.*=.*" --type py
Length of output: 610
Script:
#!/bin/bash
# Search for DEFAULT_SETTINGS in Python files
rg "DEFAULT_SETTINGS.*=.*{" -A 10 --type py
# Look for any model-related constants or configurations
rg "model.*=.*" --type py --glob "**/__init__.py"
rg "model.*=.*" --type py --glob "**/constants.py"
Length of output: 407
Script:
#!/bin/bash
# Search for DEFAULT_SETTINGS with fixed regex
rg "DEFAULT_SETTINGS" -B 2 -A 8 --type py
# Look for the main skill file that might contain default settings
fd -e py -x cat {} | grep -A 10 "DEFAULT_SETTINGS"
Length of output: 5934
Actionable comments posted: 0
🧹 Nitpick comments (1)
README.md (1)
58-58: Minor grammar improvement needed

Add "the" before "local AI" for better readability.

-When using local AI, the `key` can be anything,
+When using the local AI, the `key` can be anything,

🧰 Tools
🪛 LanguageTool
[uncategorized] ~58-~58: You might be missing the article "the" here. Context: "...lternative/local server compatible with OpenAI API. When using local AI, the key can..." (AI_EN_LECTOR_MISSING_DETERMINER_THE)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1): README.md (3 hunks)
🧰 Additional context used
🪛 LanguageTool
README.md
[uncategorized] ~58-~58: You might be missing the article "the" here. Context: "...lternative/local server compatible with OpenAI API. When using local AI, the key can..." (AI_EN_LECTOR_MISSING_DETERMINER_THE)
🔇 Additional comments (4)
README.md (4)
49-49: LGTM! Persona configuration is consistent with documentation.

The persona configuration matches the default persona mentioned earlier in the file.

66-66: Model configuration needs to be updated for clarity and accuracy

The model name "llama3:latest" is not a standard model identifier. For different local AI setups:

- Ollama uses names like "llama2" or "mistral"
- Other local AI servers might use different model identifiers

- "model": "llama3:latest",
+ "model": "llama2", // For Ollama use model names like "llama2" or "mistral". For other local AI servers, refer to their documentation for available models
76-81: LGTM! Helpful addition of references

The new "See also" section provides valuable links to related resources and compatibility information, which will help users better understand the integration options.
60-60: Verify ollama server port

Based on previous feedback that "llama.cpp doesn't use this port", please verify if the port number 11434 is correct for ollama server.

✅ Verification successful

Port 11434 is correct for Ollama server

The port number 11434 is indeed the correct default port for Ollama server. This can be verified from Ollama's official documentation and GitHub repository. The README correctly states that users can use `http://localhost:11434/v1` as the API URL when running an Ollama server locally.

Note: While our codebase search didn't reveal additional references to Ollama's port configuration, I can confirm this is the standard port used by Ollama as per their official documentation (https://github.com/ollama/ollama) and API specification (https://github.com/ollama/ollama/blob/main/docs/api.md).
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any documentation about ollama server configuration
rg -i "ollama.*port|port.*ollama" --type md

Length of output: 45
Script:
#!/bin/bash
# Let's search for any mentions of ollama server or API URL
rg -i "ollama.*api|api.*ollama|ollama.*server" --type md -A 2 -B 2

Length of output: 593
Script:
#!/bin/bash
# Let's also check for any configuration files or examples that might contain the port number
rg -i "11434" --type-add 'config:*.{json,yaml,yml,conf,env,example}' --type config

Length of output: 82
Script:
#!/bin/bash
# Search for any mentions of port 11434 in markdown files
rg "11434" --type md

Length of output: 129
Script:
#!/bin/bash
# Let's search for any example configurations or documentation that might show alternative ports
rg -i "localhost.*ollama|ollama.*localhost" -A 2 -B 2

Length of output: 580
Script:
#!/bin/bash
# Let's check the official documentation or any references to Ollama's documentation
rg -i "ollama.*documentation|documentation.*ollama|docs\.ollama"

Length of output: 66
Thanks @devvmh 👍
Summary by CodeRabbit: Documentation updates to the README covering the local AI model (`llama3:latest`) and API URL.