Add LlamaCpp to readme and documentation
1runeberg committed Aug 20, 2024
1 parent 1b89591 commit fb6ac11
Showing 2 changed files with 46 additions and 1 deletion.
4 changes: 3 additions & 1 deletion README.md
@@ -45,7 +45,9 @@ In a nutshell, ConfiChat caters to users who value transparent control over thei

- **Cross-Platform Compatibility**: Developed in Flutter, ConfiChat runs on Windows, Linux, Android, MacOS, and iOS

- **Local Model (Ollama) Support**: [Ollama](https://ollama.com) offers a range of lightweight, open-source local models, such as [Llama by Meta](https://ai.meta.com/llama/), [Gemma by Google](https://ai.google.dev/gemma), and [Llava](https://github.com/haotian-liu/LLaVA) for multimodal/image support. These models are designed to run efficiently even on machines with limited resources. By focusing on local processing, Ollama enables the use of powerful LLMs without relying on cloud services, enhancing both privacy and offline capabilities. For an up-to-date list of available models, refer to the [Ollama library](https://ollama.com/library).
- **Local Model Support (Ollama and LlamaCpp)**: [Ollama](https://ollama.com) and [LlamaCpp](https://github.com/ggerganov/llama.cpp) both run lightweight, open-source local models, such as [Llama by Meta](https://ai.meta.com/llama/), [Gemma by Google](https://ai.google.dev/gemma), and [Llava](https://github.com/haotian-liu/LLaVA) for multimodal/image support. These models are designed to run efficiently even on machines with limited resources. By focusing on local processing, both tools enable the use of powerful LLMs without relying on cloud services, enhancing both privacy and offline capabilities.

For an up-to-date list of available models, refer to the [Ollama library](https://ollama.com/library), or browse [Hugging Face](https://huggingface.co/) for GGUF model files to use with LlamaCpp.

- **OpenAI Integration**: Seamlessly integrates with [OpenAI](https://openai.com) to provide advanced language model capabilities using your [own API key](https://platform.openai.com/docs/quickstart). Please note that while the API does not store conversations the way ChatGPT does, OpenAI retains input data for abuse monitoring purposes. You can review their latest [data retention and security policies](https://openai.com/enterprise-privacy/); in particular, see "How does OpenAI handle data retention and monitoring for API usage?" in their FAQ.

43 changes: 43 additions & 0 deletions docs/quickstart.md
@@ -21,6 +21,11 @@ Get up and running with **ConfiChat** by following this guide. Whether you're us
- [4. Get Your OpenAI API Key](#4-get-your-openai-api-key)
- [5. Configure ConfiChat with Your API Key](#5-configure-confichat-with-your-api-key)
- [Additional Resources](#additional-resources-2)
4. [Using ConfiChat with LlamaCpp](#using-confichat-with-llamacpp)
- [1. Install LlamaCpp](#1-install-llamacpp)
- [2. Run LlamaCpp Server](#2-run-llamacpp-server)
- [3. Set Up ConfiChat](#3-set-up-confichat)
- [Additional Resources](#additional-resources-3)

---

@@ -162,3 +167,41 @@ Follow the instructions in the [Configure ConfiChat with Your API Key](#3-config
### Additional Resources

For more detailed instructions and troubleshooting, please visit the [Ollama documentation](https://ollama.com/docs), the [OpenAI documentation](https://platform.openai.com/docs), and the [ConfiChat repository](https://github.com/your-repository/ConfiChat).


## Using ConfiChat with LlamaCpp

Set up **LlamaCpp** with **ConfiChat** by following these steps. This section will guide you through installing LlamaCpp, running the server, and configuring ConfiChat.

### 1. Install LlamaCpp

To use LlamaCpp, you first need to install it:

- **macOS**:
  ```bash
  brew install llama.cpp
  ```

- **Windows**:
  Download the binaries from the [LlamaCpp GitHub releases page](https://github.com/ggerganov/llama.cpp/releases) and follow the installation instructions.

- **Linux**:
  LlamaCpp is not generally available in distribution package repositories, so build it from source:
  ```bash
  git clone https://github.com/ggerganov/llama.cpp
  cd llama.cpp
  make
  ```

### 2. Run LlamaCpp Server
After installing LlamaCpp, run the LlamaCpp server with your desired model (a `.gguf` file):
```bash
llama-server -m /path/to/your/model --port 8080
```

This command will start the LlamaCpp server, which ConfiChat can connect to for processing language model queries.
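As a quick smoke test, you can query the server directly before pointing ConfiChat at it. This assumes the server started above is listening on `localhost:8080`; llama.cpp's server exposes a native `/completion` endpoint (alongside an OpenAI-compatible `/v1/chat/completions` endpoint):

```shell
# Send a short completion request to the running llama-server.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the capital of France?", "n_predict": 32}'
```

If the server is up, this returns a JSON response containing the generated text.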

### 3. Set Up ConfiChat

Follow the instructions in the [Set Up ConfiChat](#3-set-up-confichat) section above.

### Additional Resources

For more detailed instructions and troubleshooting, please visit the [LlamaCpp documentation](https://github.com/ggerganov/llama.cpp) and the [ConfiChat repository](https://github.com/your-repository/ConfiChat).
