
[BUG] Model path error when launching the startup script startup.py #3066

Closed
Fripping opened this issue Feb 24, 2024 · 7 comments

Assignees: zRzRzRzRzRzRzR
Labels: bug (Something isn't working)

Comments

@Fripping
Problem Description
When loading chatglm3-6b via the startup script startup.py, I hit an empty-model-path error.

Steps to Reproduce

  1. Run 'python startup.py -a'
  2. The error appears:
    2024-02-24 13:31:46 | ERROR | stderr | File "C:\Users\dell、.cache\huggingface\modules\transformers_modules\chatglm3-6b\tokenization_chatglm.py", line 17, in __init__
    2024-02-24 13:31:46 | ERROR | stderr | assert os.path.isfile(model_path), model_path
    2024-02-24 13:31:46 | ERROR | stderr | AssertionError: None

Is this a problem with the model.config configuration file?
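
A quick way to narrow this down, as a minimal sketch (the directory below is a placeholder for wherever the local copy of chatglm3-6b lives), is to verify the SentencePiece vocab file that the failing assertion tests:

import os

# Placeholder: substitute the local chatglm3-6b directory from your config.
model_dir = r"C:\AI\models\THUDM\chatglm3-6b"

# SPTokenizer asserts os.path.isfile(model_path) on the SentencePiece vocab
# file (tokenization_chatglm.py, line 17), so test that file directly.
vocab_file = os.path.join(model_dir, "tokenizer.model")
print("model dir exists: ", os.path.isdir(model_dir))
print("vocab file exists:", os.path.isfile(vocab_file))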

Expected Result
Describe the expected result.

Actual Result
Describe the actual result.

Environment Information

  • langchain-ChatGLM version / commit: 0.2.10
  • Deployed with Docker (yes/no): no
  • Model (ChatGLM2-6B / Qwen-7B, etc.): ChatGLM3-6B
  • Embedding model (moka-ai/m3e-base, etc.): bge-large-zh
  • Vector store type (faiss / milvus / pg_vector, etc.): faiss
  • OS and version: Windows 10
  • Python version: 3.10.13
  • Other relevant environment information:

Additional Information

@Fripping added the bug (Something isn't working) label on Feb 24, 2024

dosubot bot commented Feb 24, 2024

🤖

Hey @Fripping! Great to see you again, I hope you've been well!

Based on the error message you provided, the problem is likely that the model path is incorrect or that the model files do not exist at the specified location. You can check this by verifying the model_path value in the kwargs dictionary inside the run_model_worker function, for example with the debugging sketch below.
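
A minimal debugging sketch, assuming you are editing a local copy of startup.py (these lines go inside run_model_worker, right where kwargs is built):

kwargs = get_model_worker_config(model_name)
# Print the resolved path before FastChat tries to load it; an empty string
# or None here means the config lookup failed for this model name.
print(f"resolved model_path for {model_name!r}: {kwargs.get('model_path')!r}")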

In addition, the model files you are using may not be compatible with the version of Langchain-Chatchat you are running. Please make sure the model files are compatible with Langchain-Chatchat 0.2.10.

Another possibility is an issue with your Python version. Langchain-Chatchat may not be fully compatible with Python 3.10.13; you could try a different Python version.

If none of the above resolves the problem, I suggest inspecting the error logs for more detail. That will help pin down the cause more precisely.

I also found some similar, already-resolved issues in the Langchain-Chatchat issue tracker that may help:

  1. [BUG] python startup.py -a errors out and hangs; the model is local at C:\AI\models\THUDM\chatglm3-6b
  2. Running python startup.py -a reports an error
  3. Error on startup (python startup.py -a), as below; is the hardware insufficient?

I hope this information helps. If you have any other questions or need more help, feel free to let me know.


About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

@Fripping (Author)

@dosu-bot Adding the full error output:
2024-02-24 13:31:46 | ERROR | stderr | Process model_worker - chatglm3-6b:
2024-02-24 13:31:46 | ERROR | stderr | Traceback (most recent call last):
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Anaconda3\envs\SciAgent\lib\multiprocessing\process.py", line 314, in _bootstrap
2024-02-24 13:31:46 | ERROR | stderr | self.run()
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Anaconda3\envs\SciAgent\lib\multiprocessing\process.py", line 108, in run
2024-02-24 13:31:46 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Pythonfiles\Langchain-Chatchat\startup.py", line 389, in run_model_worker
2024-02-24 13:31:46 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Pythonfiles\Langchain-Chatchat\startup.py", line 217, in create_model_worker_app
2024-02-24 13:31:46 | ERROR | stderr | worker = ModelWorker(
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Anaconda3\envs\SciAgent\lib\site-packages\fastchat\serve\model_worker.py", line 77, in __init__
2024-02-24 13:31:46 | ERROR | stderr | self.model, self.tokenizer = load_model(
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Anaconda3\envs\SciAgent\lib\site-packages\fastchat\model\model_adapter.py", line 348, in load_model
2024-02-24 13:31:46 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Anaconda3\envs\SciAgent\lib\site-packages\fastchat\model\model_adapter.py", line 816, in load_model
2024-02-24 13:31:46 | ERROR | stderr | tokenizer = AutoTokenizer.from_pretrained(
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Anaconda3\envs\SciAgent\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 801, in from_pretrained
2024-02-24 13:31:46 | ERROR | stderr | return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Anaconda3\envs\SciAgent\lib\site-packages\transformers\tokenization_utils_base.py", line 2029, in from_pretrained
2024-02-24 13:31:46 | ERROR | stderr | return cls._from_pretrained(
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Anaconda3\envs\SciAgent\lib\site-packages\transformers\tokenization_utils_base.py", line 2261, in _from_pretrained
2024-02-24 13:31:46 | ERROR | stderr | tokenizer = cls(*init_inputs, **init_kwargs)
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Users\dell、.cache\huggingface\modules\transformers_modules\chatglm3-6b\tokenization_chatglm.py", line 109, in __init__
2024-02-24 13:31:46 | ERROR | stderr | self.tokenizer = SPTokenizer(vocab_file)
2024-02-24 13:31:46 | ERROR | stderr | File "C:\Users\dell、.cache\huggingface\modules\transformers_modules\chatglm3-6b\tokenization_chatglm.py", line 17, in __init__
2024-02-24 13:31:46 | ERROR | stderr | assert os.path.isfile(model_path), model_path
2024-02-24 13:31:46 | ERROR | stderr | AssertionError: None
Hopefully this helps pin down the cause of the error.

@Fripping (Author)

@dosu-bot Also, which file is the run_model_maker function in?


dosubot bot commented Feb 24, 2024

🤖

Based on the error message you provided, it seems that model_path is not properly set. This could be because the get_model_worker_config(model_name) function does not return a dictionary with a model_path key, or because the value of that key is empty.

The get_model_worker_config(model_name) function returns the configuration for a model worker based on the model name. This configuration includes details such as the host, port, model names, controller address, worker address, and model path, which are used to set up and run the model worker.

The function is used in the startup.py file and is imported from the server.utils module. Here is the relevant code snippet:

# From startup.py; elided parts are marked with "...".
import multiprocessing as mp

from configs import LLM_MODELS                    # defined in configs/model_config.py
from server.utils import get_model_worker_config

def run_model_worker(
        model_name: str = LLM_MODELS[0],
        controller_address: str = "",
        log_level: str = "INFO",
        q: mp.Queue = None,
        started_event: mp.Event = None,
):
    ...
    # Resolve the worker config (host, port, model_path, ...) for this model.
    kwargs = get_model_worker_config(model_name)
    ...
    # If the config carries no model_path, this falls back to "" and the
    # worker later fails while loading the tokenizer.
    model_path = kwargs.get("model_path", "")
    kwargs["model_path"] = model_path
    ...

To fix this issue, you need to ensure that the get_model_worker_config(model_name) function returns a dictionary with a model_path key that points to the location of the model you want to load.
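
In the 0.2.x releases this usually means giving the chatglm3-6b entry in configs/model_config.py an absolute local directory. A minimal sketch, assuming the 0.2.x MODEL_PATH layout (verify against your own model_config.py):

# configs/model_config.py (0.2.x layout); a sketch, not a drop-in file.
MODEL_PATH = {
    "llm_model": {
        # Prefer an absolute, ASCII-only local path. If this stays a repo id
        # such as "THUDM/chatglm3-6b", the files are resolved through the
        # HuggingFace cache under C:\Users\<user>\.cache instead.
        "chatglm3-6b": r"C:\AI\models\THUDM\chatglm3-6b",
    },
}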

As for the run_model_maker function, I couldn't find it in the Langchain-Chatchat repository. Could you please provide more context or details about this function? For example, in which file or module did you find a reference to this function? This would help me provide a more accurate answer.

Regarding similar issues, I found several in the Langchain-Chatchat repository that might be helpful:

  1. Running python startup.py -a reports an error
  2. When starting with python startup.py -a, loading a local model reports a path error and the model cannot be loaded
  3. python startup.py -a errors out and hangs

These issues suggest checking the integrity of the model files, ensuring that all necessary dependencies are installed, and verifying that the Python interpreter is compatible with the model.
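
A quick integrity check along those lines is to load the tokenizer directly with transformers, bypassing startup.py and FastChat entirely. A minimal sketch (the path is a placeholder; trust_remote_code is required because ChatGLM3 ships its own tokenization_chatglm.py):

from transformers import AutoTokenizer

# Placeholder path: use the same directory configured in model_config.py.
tok = AutoTokenizer.from_pretrained(
    r"C:\AI\models\THUDM\chatglm3-6b",
    trust_remote_code=True,
)
print(type(tok).__name__)  # expect ChatGLMTokenizer if the files are intact

If this fails with the same AssertionError, the problem lies in the model files or the path itself rather than in Langchain-Chatchat.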


@zRzRzRzRzRzRzR self-assigned this Feb 24, 2024
@zRzRzRzRzRzRzR (Collaborator)

2024-02-24 13:31:46 | ERROR | stderr | AssertionError: None
Your model location is wrong, isn't it?

@Fripping (Author)

@zRzRzRzRzRzRzR I checked; the path in the configuration file is fine.

@zRzRzRzRzRzRzR (Collaborator)

\dell、.cache is obviously the problem, isn't it?
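
The 、 in C:\Users\dell、 is a full-width character in the Windows user-profile path, and the failing tokenization_chatglm.py is loaded from the HuggingFace cache under that directory. A minimal workaround sketch, assuming the cache location is the culprit: move the cache to an ASCII-only path before transformers is imported (HF_HOME is the standard HuggingFace cache variable):

import os

# Must run before transformers/huggingface_hub are imported anywhere in the
# process; HF_HOME relocates the whole HuggingFace home, including the
# transformers_modules directory that tokenization_chatglm.py loads from.
os.environ["HF_HOME"] = r"C:\hf_cache"  # any ASCII-only directory

Setting HF_HOME as a Windows user environment variable achieves the same without code changes.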
