Configuring local Ollama instance for use with LLM honeypots. #1741
Hi all, for context I am running T-Pot on Debian 12.9.0 with an i9-13900K, an RTX 3080 10 GB, 128 GB of DDR5 RAM and a 4 TB SSD. I have installed Ollama as described in the README, and I would like to use the model llama3.3 with Galah and phi4 with Beelzebub. I have confirmed that both models are downloaded correctly and work when I interact with them via terminal prompts (quick check at the end of this post). I have configured my `.env` file as follows...
...and my `docker-compose.yml` with the following:
However, when interacting with Beelzebub on port 2022 (as per my docker-compose file), every SSH input returns the same error. Per the discussion within Issue #1729, I had already changed my configuration from what I was initially using. Any advice or guidance would be appreciated. Thanks all.
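For reference, this is roughly how I confirmed both models respond locally before pointing the honeypots at them (a sketch rather than my exact session; adjust the model tags to whatever `ollama list` shows on your machine):

```bash
# List the models that have been pulled into the local Ollama instance
ollama list

# Quick sanity check that each model actually generates a response
ollama run llama3.3 "Reply with OK if you can read this."
ollama run phi4 "Reply with OK if you can read this."
```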
Replies: 1 comment 2 replies
Thank you for the reply @t3chn0m4g3. After reading Ollama's faq.md, particularly "Setting environment variables on Linux", and adding the line `Environment="OLLAMA_HOST=0.0.0.0"` within the `ollama.service` file, I got both honeypots communicating with Ollama.
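For anyone else hitting this, the steps from Ollama's faq.md look roughly like the sketch below (shown via a systemd override; adding the same line directly to `ollama.service`, as I did, has the same effect):

```bash
# Open an override file for the Ollama systemd unit
sudo systemctl edit ollama.service

# In the editor, add the following and save:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"

# Reload systemd and restart Ollama so it binds to all interfaces
sudo systemctl daemon-reload
sudo systemctl restart ollama
```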
I have shared the relevant snippets of my `.env` file, should anyone else have a similar issue running Ollama locally. Obviously, replace `192.168.0.1` with the inet IP address displayed in the output of the command `ip addr`.
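To find that address and to confirm Ollama is actually reachable on it (and not just on 127.0.0.1), something along these lines works; `192.168.0.1` is only the placeholder used above, and 11434 is Ollama's default port:

```bash
# Show the host's network interfaces; use the "inet" address of your LAN interface
ip addr

# Ollama should now answer on the LAN address; this lists the locally pulled models
curl http://192.168.0.1:11434/api/tags
```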