Commit

chore: update docker-compose names

jaluma committed Aug 5, 2024
1 parent 5a5b212 commit e54fac0
Showing 2 changed files with 10 additions and 9 deletions.
11 changes: 6 additions & 5 deletions docker-compose.yaml
@@ -21,11 +21,12 @@ services:
       PGPT_MODE: ollama
       PGPT_EMBED_MODE: ollama
       PGPT_OLLAMA_API_BASE: http://ollama:11434
+      HF_TOKEN: ${HF_TOKEN:-}
     profiles:
       - ""
-      - ollama
+      - ollama-cpu
       - ollama-cuda
-      - ollama-host
+      - ollama-api
 
   # Private-GPT service for the local mode
   # This service builds from a local Dockerfile and runs the application in local mode.
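Worth noting in the hunk above: the new `HF_TOKEN: ${HF_TOKEN:-}` line uses Compose's shell-style parameter expansion, where `:-` with nothing after it falls back to an empty string when the variable is unset, so this service no longer depends on the token being defined. A minimal sketch of the expansion rule (plain shell shown purely for illustration; the token value is a made-up placeholder):

```sh
# ${HF_TOKEN:-} expands to "" when HF_TOKEN is unset or empty,
# so `docker-compose up` works without the variable being defined:
unset HF_TOKEN
echo "token='${HF_TOKEN:-}'"   # prints: token=''

# When the variable is set, its value passes through unchanged:
export HF_TOKEN=hf_example_token
echo "token='${HF_TOKEN:-}'"   # prints: token='hf_example_token'
```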
@@ -45,7 +46,7 @@ services:
       PGPT_PROFILES: local
       HF_TOKEN: ${HF_TOKEN}
     profiles:
-      - local
+      - llamacpp
 
 #-----------------------------------
 #---- Ollama services --------------
@@ -72,9 +73,9 @@ services:
       - "host.docker.internal:host-gateway"
     profiles:
       - ""
-      - ollama
+      - ollama-cpu
       - ollama-cuda
-      - ollama-host
+      - ollama-api
 
   # Ollama service for the CPU mode
   ollama-cpu:
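With the renames applied in both service blocks, the selectable profiles become `ollama-cpu`, `ollama-cuda`, `ollama-api`, and `llamacpp`. A quick sketch of how each is selected (the profile names come from this diff; the commands are standard Compose usage):

```sh
docker-compose --profile ollama-cpu up    # Ollama on CPU (was: ollama)
docker-compose --profile ollama-cuda up   # Ollama with NVIDIA CUDA (unchanged)
docker-compose --profile ollama-api up    # Ollama served outside Docker (was: ollama-host)
docker-compose --profile llamacpp up      # fully local llama.cpp mode (was: local)
```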
8 changes: 4 additions & 4 deletions fern/docs/pages/quickstart/quickstart.mdx
@@ -30,7 +30,7 @@ docker-compose up
 ```
 or with a specific profile:
 ```sh
-docker-compose --profile ollama up
+docker-compose --profile ollama-cpu up
 ```
 
 #### 2. Ollama Nvidia CUDA
@@ -47,7 +47,7 @@ To start the services with CUDA support using pre-built images, run:
 docker-compose --profile ollama-cuda up
 ```
 
-#### 3. Ollama Host
+#### 3. Ollama External API
 
 **Description:**
 This profile is designed for running PrivateGPT using Ollama installed on the host machine. This setup is particularly useful for MacOS users, as Docker does not yet support Metal GPU.
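The renamed profile pairs with the `OLLAMA_HOST=0.0.0.0 ollama serve` step shown in the next hunk; a sketch of the resulting two-step workflow, assuming Ollama is installed on the host and serving on its default port 11434:

```sh
# 1. Bind the host's Ollama to all interfaces so the containers can reach it
#    through host.docker.internal (mapped via host-gateway in the compose file):
OLLAMA_HOST=0.0.0.0 ollama serve

# 2. In another terminal, start PrivateGPT against that external API:
docker-compose --profile ollama-api up
```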
@@ -62,7 +62,7 @@ OLLAMA_HOST=0.0.0.0 ollama serve
 ```
 To start the services with the host configuration using pre-built images, run:
 ```sh
-docker-compose --profile ollama-host up
+docker-compose --profile ollama-api up
 ```
 
 ### Fully Local Setups
@@ -78,7 +78,7 @@ A **Hugging Face Token (HF_TOKEN)** is required for accessing Hugging Face model
 **Run:**
 Start the services with your Hugging Face token using pre-built images:
 ```sh
-HF_TOKEN=<your_hf_token> docker-compose up --profile local
+HF_TOKEN=<your_hf_token> docker-compose up --profile llamacpp
 ```
 Replace `<your_hf_token>` with your actual Hugging Face token.
