
How to change the LLM to my own model, like 'deepseek-v3-241226'? #439

Open
MyQiongbao opened this issue Mar 24, 2025 · 2 comments

@MyQiongbao

I have an LLM named 'deepseek-v3-241226' based on DeepSeek-V3. I edited the project and added the entry to the deepseek list of predefined model names:

```python
# Predefined model names for common providers
model_names = {
    "anthropic": ["claude-3-5-sonnet-20241022", "claude-3-5-sonnet-20240620", "claude-3-opus-20240229"],
    "openai": ["gpt-4o", "gpt-4", "gpt-3.5-turbo", "o3-mini"],
    "deepseek": ["deepseek-chat", "deepseek-reasoner", "deepseek-v3-241226"],
    "google": ["gemini-2.0-flash", "gemini-2.0-flash-thinking-exp", "gemini-1.5-flash-latest", "gemini-1.5-flash-8b-latest", "gemini-2.0-flash-thinking-exp-01-21", "gemini-2.0-pro-exp-02-05"],
    "ollama": ["qwen2.5:7b", "qwen2.5:14b", "qwen2.5:32b", "qwen2.5-coder:14b", "qwen2.5-coder:32b", "llama2:7b", "deepseek-r1:14b", "deepseek-r1:32b"],
    "azure_openai": ["gpt-4o", "gpt-4", "gpt-3.5-turbo"],
    "mistral": ["pixtral-large-latest", "mistral-large-latest", "mistral-small-latest", "ministral-8b-latest"],
    "alibaba": ["qwen-plus", "qwen-max", "qwen-turbo", "qwen-long"],
    "moonshot": ["moonshot-v1-32k-vision-preview", "moonshot-v1-8k-vision-preview"],
}
```

Then I open 127.0.0.1:7788, select 'deepseek' as the LLM provider and 'deepseek-v3-241226' as the model name. After running the agent, I get this error:
```
INFO     [src.agent.custom_agent] Starting model call 2025-03-24 17:31:35.902042
ERROR    [agent] ❌ Result failed 1/3 times:
 404 page not found
Stacktrace:
Traceback (most recent call last):
  File "D:\work\ai\web-ui\src\agent\custom_agent.py", line 350, in step
    raise e
  File "D:\work\ai\web-ui\src\agent\custom_agent.py", line 338, in step
    model_output = await self.get_next_action(input_messages)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\AppData\Roaming\Python\Python313\site-packages\browser_use\utils.py", line 36, in wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\work\ai\web-ui\src\agent\custom_agent.py", line 230, in get_next_action
    ai_message = self.llm.invoke(input_messages)
  File "C:\Users\Administrator\AppData\Roaming\Python\Python313\site-packages\langchain_core\language_models\chat_models.py", line 307, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
```

How do I add our own LLM? Could anyone give me some suggestions?
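
Since the failure is a 404 rather than an authentication or invalid-model error, the request is most likely going to the wrong URL path. One way to isolate the problem is to call the endpoint directly, outside the web UI. A rough check with `requests`, assuming an OpenAI-style chat-completions API (the URL and key below are placeholders):

```python
import requests

# Placeholder endpoint and key; substitute your deployment's actual values.
resp = requests.post(
    "https://xxx.xxxxx.com/api/v3/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "deepseek-v3-241226",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
# A 200 here means the endpoint and model name are fine and the
# misconfiguration is on the web-ui side (likely its base URL).
print(resp.status_code, resp.text[:200])
```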

@warmshao
Collaborator

Use deepseek-chat.

@MyQiongbao
Author

My LLM is not the official DeepSeek model. The API endpoint for invocation has the format https://xxx.xxxxx.com/api/v3/chat/completions, and the model name is deepseek-v3-241226. How do I invoke it properly?
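
An OpenAI-compatible deployment like this is usually reached by overriding the client's base URL rather than by editing the predefined model list, since that list only populates the dropdown while requests still go to the provider's default endpoint. A minimal sketch with LangChain's `ChatOpenAI`, assuming the gateway implements the OpenAI chat-completions protocol (URL and key are placeholders):

```python
from langchain_openai import ChatOpenAI

# The OpenAI client appends /chat/completions itself, so pass only the
# prefix of the endpoint as base_url (placeholder URL, not a real host).
llm = ChatOpenAI(
    model="deepseek-v3-241226",
    base_url="https://xxx.xxxxx.com/api/v3",
    api_key="YOUR_API_KEY",  # whatever token your gateway expects
    temperature=0.0,
)

print(llm.invoke("ping").content)
```

If this works standalone, the remaining step is to point the web-ui at the same base URL (e.g. via the Base URL field or the endpoint variable in its .env, depending on your version) instead of the official DeepSeek endpoint.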
