docs: updates
mudler committed Jan 6, 2024
1 parent b0fda36 commit e285810
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions docs/content/getting_started/_index.en.md
@@ -155,6 +155,8 @@ You can run `local-ai` directly with a model name, and it will download the mode
| Embeddings | bert-cpp | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core bert-cpp``` |
| Embeddings | all-minilm-l6-v2 | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg all-minilm-l6-v2``` |
| Audio to Text | whisper-base | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core whisper-base``` |
| Text to Audio | rhasspy-voice-en-us-amy | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core rhasspy-voice-en-us-amy``` |


{{% /tab %}}
{{% tab name="GPU (CUDA 11)" %}}
@@ -169,6 +171,7 @@ You can run `local-ai` directly with a model name, and it will download the mode
| Embeddings | bert-cpp | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core bert-cpp``` |
| Embeddings | all-minilm-l6-v2 | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 all-minilm-l6-v2``` |
| Audio to Text | whisper-base | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core whisper-base``` |
| Text to Audio | rhasspy-voice-en-us-amy | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core rhasspy-voice-en-us-amy``` |

{{% /tab %}}
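
The added rows start a container with the rhasspy English (Amy) voice preloaded. As a rough sketch of how the new Text to Audio entry could then be exercised, assuming LocalAI's `/tts` endpoint and using `en-us-amy-low.onnx` as the voice model name (the exact identifier is an assumption and may differ from what the preloaded config expects):

```bash
# Start the CPU image with the rhasspy voice preloaded (same command as the table above)
docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core rhasspy-voice-en-us-amy

# In another shell, request speech synthesis and save the resulting audio.
# The model name below is an assumption; it may need to match the .onnx file
# shipped with the rhasspy voice.
curl http://localhost:8080/tts \
  -H "Content-Type: application/json" \
  -d '{"model": "en-us-amy-low.onnx", "input": "Hello from LocalAI"}' \
  --output hello.wav
```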

