chore(deps): update container image docker.io/localai/localai to v2.5.0 by renovate (truecharts#17044)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.4.1-cublas-cuda11-ffmpeg-core` -> `v2.5.0-cublas-cuda11-ffmpeg-core` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.5.0`](https://github.com/mudler/LocalAI/releases/tag/v2.5.0)

[Compare Source](https://github.com/mudler/LocalAI/compare/v2.4.1...v2.5.0)

<!-- Release notes generated using configuration in .github/release.yml at master -->

##### What's Changed

This release adds more embedded models and shrinks image sizes. You can now run `phi-2` (see [here](https://localai.io/basics/getting_started/#running-popular-models-one-click) for the full list) locally by starting LocalAI with:

```bash
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core phi-2
```

LocalAI now accepts as arguments a list of model short-hands and/or URLs pointing to valid YAML files. A popular way to host those files is GitHub gists.
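As a rough illustration of what such a YAML file contains, here is a minimal sketch of a model definition. The field names follow LocalAI's model-config convention, but the specific values (`my-model`, the backend, the weights filename) are hypothetical; real files such as `embedded/models/llava.yaml` in the LocalAI repository carry additional fields.

```yaml
# Hypothetical minimal LocalAI model definition (illustrative values).
name: my-model            # short-hand name used to select the model in API requests
backend: llama-cpp        # inference backend to use (assumption for this sketch)
parameters:
  model: my-model.gguf    # weights file the backend should load
```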
For instance, you can run `llava` by starting `local-ai` with:

```bash
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/llava.yaml
```

##### Exciting New Features 🎉

- feat: more embedded models, coqui fixes, add model usage and description by [@mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1556](https://github.com/mudler/LocalAI/pull/1556)

##### 👒 Dependencies

- deps(conda): use transformers-env with vllm, exllama(2) by [@mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1554](https://github.com/mudler/LocalAI/pull/1554)
- deps(conda): use transformers environment with autogptq by [@mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/1555](https://github.com/mudler/LocalAI/pull/1555)
- ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1558](https://github.com/mudler/LocalAI/pull/1558)

##### Other Changes

- ⬆️ Update docs version mudler/LocalAI by [@localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/1557](https://github.com/mudler/LocalAI/pull/1557)

**Full Changelog**: mudler/LocalAI@v2.4.1...v2.5.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 10pm on monday" in timezone Europe/Amsterdam, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.
---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).