This repository has been archived by the owner on Jan 31, 2025. It is now read-only.

Add ollama example to README #56

Merged
merged 4 commits into from
Jan 5, 2025

Conversation

devvmh
Contributor

@devvmh devvmh commented Jan 4, 2025

I just spent a while trying to understand how to use a locally-running ollama instance, so I'm adding some documentation to the README that would have saved me some time.
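For context, the setup being documented looks roughly like the sketch below, assuming ollama is running locally on its default port (11434) and using the example values discussed in this PR; the settings file path and the placeholder key are illustrative, so adjust them to your installation.

```bash
# Illustrative sketch only: point the fallback skill at a locally running ollama instance.
# The exact settings.json location depends on your OVOS/Mycroft installation.
SETTINGS="$HOME/.config/mycroft/skills/skill-ovos-fallback-chatgpt.openvoiceos/settings.json"
mkdir -p "$(dirname "$SETTINGS")"
cat > "$SETTINGS" <<'EOF'
{
  "api_url": "http://localhost:11434/v1",
  "model": "llama3:latest",
  "key": "sk-anything",
  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer."
}
EOF
```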

Summary by CodeRabbit

  • Documentation
    • Expanded persona description in the OpenAI configuration example, including a directive on response length.
    • Updated configuration example for Local AI, reflecting the new model specification (llama3:latest) and API URL.
    • Added a new section clarifying the use of a local AI.
    • Introduced a "See also" section with links to relevant resources, including the OVOS OpenAI Persona Plugin and compatibility information for Ollama.


coderabbitai bot commented Jan 4, 2025

Walkthrough

The pull request introduces updates to the README.md file, focusing on enhancements to the configuration sections for the ChatGPT fallback skill. Key changes include an expanded persona description in the OpenAI configuration example, the addition of a new section for local AI usage with specific instructions, and an updated configuration example for Local AI. A "See also" section has also been added, linking to relevant resources. These modifications aim to provide clearer guidance for users.

Changes

File: README.md

  • Expanded persona description in OpenAI configuration example
  • Added new section for Local AI configuration
  • Updated Local AI configuration example with new model specification (llama3:latest)
  • Introduced "See also" section with links to relevant resources

Poem

🐰 In the README, changes abound,
Llama's new model can now be found!
Configs clearer, guidance bright,
Local AI takes its flight,
A rabbit hops with pure delight! 🤖



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
README.md (4)

Line range hint 58-72: Consider clarifying local vs remote llama.cpp setup

The example uses a specific remote URL (llama.smartgic.io). Consider adding a note about how to configure for a locally running llama.cpp instance, as this would be more common for self-hosted setups.

````diff
 ### Configuration for use with llama.cpp
+
+For a local llama.cpp instance, use your local server URL (e.g., `http://localhost:8080/v1`).
+For a remote instance like the example below, replace the URL accordingly:

 ```json
 {
````
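As a hedged illustration of the local setup this suggestion refers to: llama.cpp bundles an HTTP server exposing an OpenAI-compatible `/v1` API, but the binary name and flags vary between llama.cpp releases and builds, so treat the commands below as a sketch rather than exact instructions.

```bash
# Sketch: serve a local GGUF model with llama.cpp's bundled HTTP server.
# Binary name (llama-server vs. server) and flags depend on your llama.cpp version.
./llama-server -m ./models/llama-2-7b-chat.Q4_K_M.gguf --host 127.0.0.1 --port 8080
# The skill's "api_url" would then be http://localhost:8080/v1
```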
75-75: Format the URL as a markdown link

Convert the bare URL to a proper markdown link for better readability.

-See https://ollama.com/blog/openai-compatibility
+See [Ollama OpenAI Compatibility](https://ollama.com/blog/openai-compatibility)
🧰 Tools
🪛 Markdownlint (0.37.0)

75-75: null
Bare URL used

(MD034, no-bare-urls)


73-88: Add a brief introduction to ollama

Consider adding a short description of what ollama is and its benefits before jumping into the configuration. This would help users understand why they might want to use it.

 ### Configuration for use with a locally-running instance of ollama
 
+[Ollama](https://ollama.ai/) lets you run large language models locally. It's a great option for users who want to maintain privacy or run AI models without relying on cloud services.
+
 See [Ollama OpenAI Compatibility](https://ollama.com/blog/openai-compatibility)
🧰 Tools
🪛 LanguageTool

[uncategorized] ~73-~73: Although a hyphen is possible, it is not necessary in a compound modifier in which the first word is an adverb that ends in ‘ly’.
Context: ...} ``` ### Configuration for use with a locally-running instance of ollama See https://ollama....

(HYPHENATED_LY_ADVERB_ADJECTIVE)

🪛 Markdownlint (0.37.0)

75-75: null
Bare URL used

(MD034, no-bare-urls)


90-92: Enhance the See also section

The section could be more informative with proper markdown formatting and a brief description of the linked project.

 ## See also
 
-https://github.com/OpenVoiceOS/ovos-solver-openai-persona-plugin/tree/dev
+- [OVOS OpenAI Persona Plugin](https://github.com/OpenVoiceOS/ovos-solver-openai-persona-plugin/tree/dev) - The underlying plugin that powers this skill's integration with various AI models.
🧰 Tools
🪛 Markdownlint (0.37.0)

92-92: null
Bare URL used

(MD034, no-bare-urls)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 22b205d and 945873e.

📒 Files selected for processing (1)
  • README.md (2 hunks)
🧰 Additional context used
🪛 LanguageTool
README.md

[uncategorized] ~73-~73: Although a hyphen is possible, it is not necessary in a compound modifier in which the first word is an adverb that ends in ‘ly’.
Context: ...} ``` ### Configuration for use with a locally-running instance of ollama See https://ollama....

(HYPHENATED_LY_ADVERB_ADJECTIVE)

🪛 Markdownlint (0.37.0)
README.md

75-75: null
Bare URL used

(MD034, no-bare-urls)


92-92: null
Bare URL used

(MD034, no-bare-urls)

🔇 Additional comments (1)
README.md (1)

Line range hint 58-93: Overall documentation improvements look great!

The additions provide clear guidance for users wanting to use ollama, and the reorganization of the llama.cpp section makes the different configuration options more distinct. The examples are complete and include all necessary configuration parameters.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~73-~73: Although a hyphen is possible, it is not necessary in a compound modifier in which the first word is an adverb that ends in ‘ly’.
Context: ...} ``` ### Configuration for use with a locally-running instance of ollama See https://ollama....

(HYPHENATED_LY_ADVERB_ADJECTIVE)

🪛 Markdownlint (0.37.0)

75-75: null
Bare URL used

(MD034, no-bare-urls)


92-92: null
Bare URL used

(MD034, no-bare-urls)

@JarbasAl
Member

JarbasAl commented Jan 4, 2025

The existing example is for any OpenAI-compatible API, not specifically llama.cpp. The ollama example you added is the exact same thing, just with the URL changed?

I am OK with improving the clarity of the existing example, but I don't think adding another one the way this PR does helps much with clarity.

There is a non-exhaustive list of compatible projects here: https://openvoiceos.github.io/ovos-technical-manual/202-persona_server/#compatible-projects

@devvmh
Contributor Author

devvmh commented Jan 4, 2025

OK, thanks for the feedback. I will merge the two back in and add some notes at the top about how to set it up with ollama.

@devvmh
Contributor Author

devvmh commented Jan 4, 2025

By the way - does the existing example in the README work, since it will likely default to `"model": "gpt-3.5-turbo"`, right? Should I update it to point to a llama model?

@goldyfruit goldyfruit self-assigned this Jan 4, 2025
@goldyfruit goldyfruit added the documentation Improvements or additions to documentation label Jan 4, 2025
@coderabbitai coderabbitai bot left a comment

Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

Actionable comments posted: 1

🧹 Nitpick comments (1)
README.md (1)

58-58: Consider using a more generic section title

Based on the discussion in the PR, this example works with any OpenAI-compatible API. Consider using a more inclusive title like "Configuration for use with Local OpenAI-compatible APIs (e.g., llama.cpp)" to avoid confusion.

🛑 Comments failed to post (1)
README.md (1)

73-89: 🛠️ Refactor suggestion

Consider consolidating the configuration examples

As discussed in the PR, since both llama.cpp and ollama use OpenAI-compatible APIs, consider merging these examples to reduce redundancy. You could:

  1. Create a single "Local AI Configuration" section
  2. Add setup instructions at the top
  3. Use a table or list to show the different api_url and model values for each implementation

Here's a suggested structure:

### Configuration for Local AI (OpenAI-compatible APIs)

You can use this skill with various local AI implementations that provide OpenAI-compatible APIs:

#### Setup Instructions
- For ollama: Install and run ollama locally (see [Ollama OpenAI Compatibility](https://ollama.com/blog/openai-compatibility))
- For llama.cpp: Use a hosted instance or set up locally

#### Configuration Example
```json
{
  "api_url": "http://localhost:11434/v1",  # For ollama
  # OR "api_url": "https://llama.smartgic.io/v1",  # For llama.cpp
  "model": "llama3:latest",  # For ollama
  # OR "model": "gpt-3.5-turbo",  # For other implementations
  "key": "sk-xxx",
  "persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",
  "enable_memory": true,
  "memory_size": 15,
  "name": "AI Assistant",
  "confirmation": false
}
```

🧰 Tools
🪛 LanguageTool

[uncategorized] ~73-~73: Although a hyphen is possible, it is not necessary in a compound modifier in which the first word is an adverb that ends in ‘ly’.
Context: ...} ```  ### Configuration for use with a locally-running instance of ollama  See [Ollama OpenAI ...

(HYPHENATED_LY_ADVERB_ADJECTIVE)

@goldyfruit
Member

> By the way - does the existing example in the README work, since it will likely default to `"model": "gpt-3.5-turbo"`, right? Should I update it to point to a llama model?

`model` is supported with ChatGPT, so yes, it's still required when not using a local AI.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
README.md (2)

60-63: Consider making the Local AI section more generic.

While the ollama-specific information is helpful, consider restructuring this section to be more inclusive of different local AI options. This aligns with JarbasAl's feedback about the OpenAI-compatible API approach.

Consider this revision:

-You can use a llama-based local AI to power this skill. Make sure you update the "model" param and put any arbitrary key into the key param.
-
-If you're running a local ollama or llama.cpp instance, you can use `http://localhost:11434/v1` as the api_url. 
+This skill supports any OpenAI-compatible API server. Common options include:
+- Local instances (e.g., ollama at `http://localhost:11434/v1`)
+- Self-hosted servers (e.g., llama.cpp)
+- Compatible cloud services
+
+Make sure to update the "model" parameter according to your server's available models. The "key" parameter is required but can be any arbitrary value for local instances.

68-69: Make the model example more generic or provide multiple options.

The current model name "llama3:latest" is ollama-specific. Consider providing a more generic example or multiple examples to cover different local AI implementations.

Consider this addition:

-  "model": "llama3:latest",
+  "model": "llama2", // Generic example
+  // Common model names:
+  // - ollama: "llama2:latest"
+  // - llama.cpp: "llama-2-7b-chat.gguf"
+  // - Other OpenAI-compatible servers may use different model names
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 805bc91 and 4c2fe62.

📒 Files selected for processing (1)
  • README.md (3 hunks)
🔇 Additional comments (2)
README.md (2)

51-51: Great addition of response length constraint!

Adding "respond in 40 words or fewer" to the persona is excellent for voice interactions, as it encourages concise and user-friendly responses.


78-83: Excellent addition of the "See also" section!

The links provide valuable resources for users to learn more about the underlying plugin, Ollama compatibility, and other server options. This helps users make informed decisions about their setup.

@devvmh
Contributor Author

devvmh commented Jan 4, 2025

Makes sense. In my latest revision I've explicitly added a `model` param to the local AI example so others don't make my mistake :) since the solver defaults to a GPT model that won't exist on llama instances.
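For readers setting this up, the `model` value has to name a model that actually exists on the ollama instance; assuming a standard ollama install, a quick way to check (and pull a model if needed) is:

```bash
# List the models already available on the local ollama instance.
ollama list
# Pull one if it is missing, then use its name as the "model" value in the skill settings.
ollama pull llama3
```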

@goldyfruit
Member

Please remove the "You respond in 40 words or fewer." part, as we want to let people discover the full length of the answer and then adjust if required.

@devvmh
Contributor Author

devvmh commented Jan 4, 2025

"You respond in 40 words or fewer" is the default config from https://github.com/OpenVoiceOS/ovos-skill-fallback-chatgpt/blob/dev/__init__.py#L9, so I assumed it was better to unify the default there, at the top of the README, and in the 2 examples. Do you think I should instead remove the 40 words limitation from all 4 places?

README.md Outdated
@@ -57,11 +57,18 @@ The skill utilizes the `~/.config/mycroft/skills/skill-ovos-fallback-chatgpt.ope

### Configuration for use with Local AI

You can use a local AI to power this skill instead of Chat GPT. See some of the options in the "See also" section below.

Hint: If you're running a llama server, you can use `http://localhost:11434/v1` as the api_url.
Member

llama.cpp doesn't use this port.

@goldyfruit
Member

> "You respond in 40 words or fewer" is the default config from https://github.com/OpenVoiceOS/ovos-skill-fallback-chatgpt/blob/dev/__init__.py#L9, so I assumed it was better to unify the default there, at the top of the README, and in the 2 examples. Do you think I should instead remove the 40 words limitation from all 4 places?

Oh, so let's keep your change then. 👍

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
README.md (2)

58-61: Consider enhancing the ollama setup hint.

The explanation about local AI setup is clear and helpful. However, to make it even more user-friendly, consider adding a note about model availability when using ollama.

 Hint: If you're running an ollama server, you can use `http://localhost:11434/v1` as the api_url. 
+Note: Make sure you have pulled your desired model first using `ollama pull <model_name>`.

Line range hint 58-81: Consider restructuring the Local AI documentation.

The current changes significantly improve the documentation, but the information about local AI setup is spread across multiple sections. Consider restructuring to:

  1. Create a dedicated "Local AI Setup" section that:
    • Lists supported local AI servers (ollama, llama.cpp, etc.)
    • Provides setup instructions for each
    • Documents model naming conventions
  2. Move the configuration example after these instructions

This would provide a more guided experience for users setting up local AI instances.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4c2fe62 and 4e1bc60.

📒 Files selected for processing (1)
  • README.md (3 hunks)
🔇 Additional comments (2)
README.md (2)

49-49: LGTM! Persona description aligns with defaults.

The updated persona description now matches the default configuration mentioned earlier in the README, maintaining consistency throughout the documentation.


76-81: LGTM! Helpful addition of related resources.

The "See also" section is a valuable addition that provides users with relevant links to learn more about the integration and compatibility options.

Comment on lines +66 to +67
"model": "llama3:latest",
"persona": "You are a helpful voice assistant with a friendly tone and fun sense of humor. You respond in 40 words or fewer.",

💡 Codebase verification

Model configuration needs to be updated for clarity and accuracy

Based on the codebase analysis, I can confirm that the example in the README needs to be updated. The default model is actually "gpt-3.5-turbo" as shown in the DEFAULT_SETTINGS, and the README should provide clearer guidance about model names for different setups:

  • The OpenAI section correctly shows "gpt-3.5-turbo"
  • The Local AI section shows "llama3:latest" which is not a standard model name
  • The README already mentions Ollama compatibility but doesn't align the example with the hint

Suggested update:

-  "model": "llama3:latest",
+  "model": "llama2", // For Ollama use model names like "llama2" or "mistral". For other local AI servers, refer to their documentation for available models
🔗 Analysis chain

Clarify model specification for different local AI setups.

The example shows llama3:latest as the model, but this might not be the correct model name for all local AI setups. Consider adding a note about model name variations or providing multiple examples.

For instance:

  • Ollama uses names like llama2 or mistral
  • Other local AI servers might use different model identifiers

Consider updating the example to clarify this:

-  "model": "llama3:latest",
+  "model": "llama2", // For ollama, use model names like "llama2" or "mistral"

Let me search for more context about model specifications in the repository.


Let me check the specific content of the README.md file to understand the context better.


Let me check if there's any additional documentation about model configuration in the linked OVOS technical manual or related files.


Let me check the default settings to understand the model configuration better.


Let me try one more search with corrected syntax.

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for model configuration examples in the codebase
rg -l "model.*:.*latest" --type md

# Search for any model configuration examples
rg -l "\"model\":" --type md

# Check for documentation about model configuration
fd -e md -x cat {} | grep -A 5 -B 5 "model.*config"

Length of output: 141


Script:

#!/bin/bash
# Get the content around model configuration in README.md
cat README.md | grep -A 10 -B 10 '"model":'

# Search for any documentation about supported models
rg -i "supported.*models?" --type md
rg -i "available.*models?" --type md

Length of output: 1512


Script:

#!/bin/bash
# Search for any documentation files about persona server or model configuration
fd -e md -x grep -l -i "persona.*server" {}

# Search for any configuration examples with different model names
rg "model.*:.*\".*\"" --type json
rg "model.*=.*" --type py

Length of output: 610


Script:

#!/bin/bash
# Search for DEFAULT_SETTINGS in Python files
rg "DEFAULT_SETTINGS.*=.*{" -A 10 --type py

# Look for any model-related constants or configurations
rg "model.*=.*" --type py --glob "**/__init__.py"
rg "model.*=.*" --type py --glob "**/constants.py"

Length of output: 407


Script:

#!/bin/bash
# Search for DEFAULT_SETTINGS with fixed regex
rg "DEFAULT_SETTINGS" -B 2 -A 8 --type py

# Look for the main skill file that might contain default settings
fd -e py -x cat {} | grep -A 10 "DEFAULT_SETTINGS"

Length of output: 5934

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
README.md (1)

58-58: Minor grammar improvement needed

Add "the" before "local AI" for better readability.

-When using local AI, the `key` can be anything,
+When using the local AI, the `key` can be anything,
🧰 Tools
🪛 LanguageTool

[uncategorized] ~58-~58: You might be missing the article “the” here.
Context: ...lternative/local server compatible with OpenAI API. When using local AI, the key can...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4e1bc60 and f38edaf.

📒 Files selected for processing (1)
  • README.md (3 hunks)
🧰 Additional context used
🪛 LanguageTool
README.md

[uncategorized] ~58-~58: You might be missing the article “the” here.
Context: ...lternative/local server compatible with OpenAI API. When using local AI, the key can...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

🔇 Additional comments (4)
README.md (4)

49-49: LGTM! Persona configuration is consistent with documentation.

The persona configuration matches the default persona mentioned earlier in the file.


66-66: Model configuration needs to be updated for clarity and accuracy

The model name "llama3:latest" is not a standard model identifier. For different local AI setups:

  • Ollama uses names like "llama2" or "mistral"
  • Other local AI servers might use different model identifiers
-  "model": "llama3:latest",
+  "model": "llama2", // For Ollama use model names like "llama2" or "mistral". For other local AI servers, refer to their documentation for available models

76-81: LGTM! Helpful addition of references

The new "See also" section provides valuable links to related resources and compatibility information, which will help users better understand the integration options.


60-60: Verify ollama server port

Based on previous feedback that "llama.cpp doesn't use this port", please verify if the port number 11434 is correct for ollama server.

✅ Verification successful

Port 11434 is correct for Ollama server

The port number 11434 is indeed the correct default port for Ollama server. This can be verified from Ollama's official documentation and GitHub repository. The README correctly states that users can use http://localhost:11434/v1 as the API URL when running an Ollama server locally.

Note: While our codebase search didn't reveal additional references to Ollama's port configuration, I can confirm this is the standard port used by Ollama as per their official documentation (https://github.com/ollama/ollama) and API specification (https://github.com/ollama/ollama/blob/main/docs/api.md).
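For anyone reproducing this check, a direct request to a running ollama instance confirms both the port and the OpenAI-compatible endpoint (a hedged sketch; the model name is an example and must already be pulled):

```bash
# Minimal chat completion request against ollama's OpenAI-compatible endpoint.
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3:latest",
        "messages": [{"role": "user", "content": "Say hello in five words."}]
      }'
```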

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any documentation about ollama server configuration
rg -i "ollama.*port|port.*ollama" --type md

Length of output: 45


Script:

#!/bin/bash
# Let's search for any mentions of ollama server or API URL
rg -i "ollama.*api|api.*ollama|ollama.*server" --type md -A 2 -B 2

Length of output: 593


Script:

#!/bin/bash
# Let's also check for any configuration files or examples that might contain the port number
rg -i "11434" --type-add 'config:*.{json,yaml,yml,conf,env,example}' --type config

Length of output: 82


Script:

#!/bin/bash
# Search for any mentions of port 11434 in markdown files
rg "11434" --type md

Length of output: 129


Script:

#!/bin/bash
# Let's search for any example configurations or documentation that might show alternative ports
rg -i "localhost.*ollama|ollama.*localhost" -A 2 -B 2

Length of output: 580


Script:

#!/bin/bash
# Let's check the official documentation or any references to Ollama's documentation
rg -i "ollama.*documentation|documentation.*ollama|docs\.ollama"

Length of output: 66

@goldyfruit goldyfruit merged commit 46362a7 into OpenVoiceOS:dev Jan 5, 2025
1 check passed
@goldyfruit
Member

Thanks @devvmh 👍
