Error when running `python startup.py -a` #2937
Comments
🤖 Hey, @sdqwesx1! Good to see you again. Run into some trouble?

Judging from the error log, the problem occurs while loading the model. First, I suggest checking that the model path in your configuration is correct. Second, if the path is fine, the model file itself may be damaged; you can try re-downloading it and see whether that resolves the issue. Finally, if neither of those helps, your environment may be missing some required dependencies; try updating your environment and make sure every dependency is installed correctly.

I hope these suggestions help. If the problem persists, or you have other questions, feel free to keep asking.

Sources
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
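The path-and-files check suggested above can be sketched as a small script. Note that the expected file list below is an assumption based on a typical chatglm3-6b checkout, not something taken from this thread:

```python
import os

# Files a chatglm3-6b checkout is usually expected to contain
# (an assumed, non-exhaustive list; adjust for your model).
EXPECTED_FILES = ["config.json", "tokenizer_config.json", "tokenizer.model"]

def check_model_dir(model_path, expected=EXPECTED_FILES):
    """Return a list of problems found with a local model directory."""
    if not os.path.isdir(model_path):
        return ["not a directory: " + model_path]
    problems = []
    for name in expected:
        full = os.path.join(model_path, name)
        if not os.path.isfile(full):
            problems.append("missing file: " + name)
        elif os.path.getsize(full) == 0:
            problems.append("empty file: " + name)
    return problems
```

Running `check_model_dir("/home/ubuntu/app/LangChain-model/chatglm3-6b")` and getting an empty list does not prove the weights are intact, but any reported problem points directly at a broken download.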
@dosu-bot, thanks for the reply. I've found the cause of that problem, but now a new one has appeared: when I run python3 startup.py the model loads normally, but I get an error.
🤖 The error you ran into concerns 'streamlit'. The 'streamlit' module is used in the 'run_webui' function of startup.py to run the web user interface of the Langchain-Chatchat application.

You can resolve this error by installing 'streamlit'. Run the following in your terminal:

`pip install streamlit`

After running that command, try `python3 startup.py` again. If you still get the same error, the Python environment where 'streamlit' was installed is probably not the one running the startup.py script. In that case, you may need to launch the script with the full path of the Python interpreter that has 'streamlit' installed.

For reference: 'streamlit' is used in the Langchain-Chatchat application to create and manage the web user interface (WebUI). It sets the page configuration, creates the sidebar, displays images, and manages the application's different pages. In short, 'streamlit' is used to build and manage the Langchain-Chatchat web interface.
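The "same environment" point above can be checked from Python itself. A minimal sketch, using only standard-library introspection (nothing here comes from the Langchain-Chatchat codebase):

```python
import importlib.util
import sys

def where_is(module_name):
    """Return the file a module would be imported from, or None if the
    current interpreter cannot find it at all."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

if __name__ == "__main__":
    # Compare the interpreter running this script with the location
    # (if any) that streamlit resolves to for that same interpreter.
    print("interpreter:", sys.executable)
    print("streamlit:  ", where_is("streamlit"))
```

If `where_is("streamlit")` prints None for the interpreter that runs startup.py, the package was installed into a different environment; installing with `python3 -m pip install streamlit` (using the exact interpreter that runs the script) avoids the mismatch.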
Isn't this just streamlit not being installed in your environment?
Yes, exactly. streamlit had not been added to the system path. I deleted it and downloaded it again, and that fixed it.
What is this... I've never run into it.
@sdqwesx1 Run
Also, I'm curious how you debug a project like this, with so many packages and models, each with lots of variables.
I haven't hit your error, so I can't reproduce it. As for debugging: debug each module separately. Once everything is merged it becomes very hard to debug, since it runs asynchronously.
@zRzRzRzRzRzRzR Thanks. To follow up: the model fetched with git lfs pull was incomplete.
git lfs install
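The incomplete `git lfs pull` failure mode is easy to detect: a file that was never actually pulled is left as a small text pointer stub instead of the real binary, which is exactly what makes sentencepiece fail to parse tokenizer.model. A minimal check, assuming the standard Git LFS pointer format:

```python
def is_lfs_pointer(path):
    """Heuristically detect a Git LFS pointer stub: a small text file
    whose first bytes name the LFS spec instead of binary model data."""
    try:
        with open(path, "rb") as f:
            head = f.read(100)
    except OSError:
        return False
    return head.startswith(b"version https://git-lfs.github.com/spec/v1")
```

If `is_lfs_pointer("/home/ubuntu/app/LangChain-model/chatglm3-6b/tokenizer.model")` returns True, the weights were never downloaded; re-run `git lfs install` followed by `git lfs pull` inside the model directory.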
问题描述 / Problem Description
The model in use is chatglm3-6b.
Running `python startup.py -a` produces the following error:
```
==============================Langchain-Chatchat Configuration==============================
OS: Linux-6.2.0-39-generic-x86_64-with-glibc2.35.
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Project version: v0.2.10
langchain version: 0.0.354
fastchat version: 0.2.35
Text splitter in use: ChineseRecursiveTextSplitter
LLM model(s) being started: ['chatglm3-6b'] @ cuda
{'device': 'cuda',
 'host': '0.0.0.0',
 'infer_turbo': False,
 'model_path': '/home/ubuntu/app/LangChain-model/chatglm3-6b',
 'model_path_exists': True,
 'port': 20002}
Embeddings model in use: bge-large-zh-v1.5 @ cuda
==============================Langchain-Chatchat Configuration==============================
```
```
2024-02-05 17:25:07,872 - startup.py[line:655] - INFO: Starting services:
2024-02-05 17:25:07,872 - startup.py[line:656] - INFO: To view llm_api logs, go to /home/ubuntu/app/Langchain-Chatchat/logs
/home/ubuntu/.local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: model startup will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related functionality in 0.2.x will be deprecated
  warn_deprecated(
2024-02-05 17:25:10 | ERROR | stderr | INFO: Started server process [381433]
2024-02-05 17:25:10 | ERROR | stderr | INFO: Waiting for application startup.
2024-02-05 17:25:10 | ERROR | stderr | INFO: Application startup complete.
2024-02-05 17:25:10 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-02-05 17:25:11 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 43d0a0c9 ...
2024-02-05 17:25:11 | ERROR | stderr | Process model_worker - chatglm3-6b:
2024-02-05 17:25:11 | ERROR | stderr | Traceback (most recent call last):
2024-02-05 17:25:11 | ERROR | stderr |   File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
2024-02-05 17:25:11 | ERROR | stderr |     self.run()
2024-02-05 17:25:11 | ERROR | stderr |   File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
2024-02-05 17:25:11 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/app/Langchain-Chatchat/startup.py", line 389, in run_model_worker
2024-02-05 17:25:11 | ERROR | stderr |     app = create_model_worker_app(log_level=log_level, **kwargs)
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/app/Langchain-Chatchat/startup.py", line 217, in create_model_worker_app
2024-02-05 17:25:11 | ERROR | stderr |     worker = ModelWorker(
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.local/lib/python3.10/site-packages/fastchat/serve/model_worker.py", line 77, in __init__
2024-02-05 17:25:11 | ERROR | stderr |     self.model, self.tokenizer = load_model(
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.local/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 348, in load_model
2024-02-05 17:25:11 | ERROR | stderr |     model, tokenizer = adapter.load_model(model_path, kwargs)
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.local/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 816, in load_model
2024-02-05 17:25:11 | ERROR | stderr |     tokenizer = AutoTokenizer.from_pretrained(
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 801, in from_pretrained
2024-02-05 17:25:11 | ERROR | stderr |     return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2029, in from_pretrained
2024-02-05 17:25:11 | ERROR | stderr |     return cls._from_pretrained(
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2261, in _from_pretrained
2024-02-05 17:25:11 | ERROR | stderr |     tokenizer = cls(*init_inputs, **init_kwargs)
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.cache/huggingface/modules/transformers_modules/chatglm3-6b/tokenization_chatglm.py", line 109, in __init__
2024-02-05 17:25:11 | ERROR | stderr |     self.tokenizer = SPTokenizer(vocab_file)
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.cache/huggingface/modules/transformers_modules/chatglm3-6b/tokenization_chatglm.py", line 18, in __init__
2024-02-05 17:25:11 | ERROR | stderr |     self.sp_model = SentencePieceProcessor(model_file=model_path)
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.local/lib/python3.10/site-packages/sentencepiece/__init__.py", line 447, in Init
2024-02-05 17:25:11 | ERROR | stderr |     self.Load(model_file=model_file, model_proto=model_proto)
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.local/lib/python3.10/site-packages/sentencepiece/__init__.py", line 905, in Load
2024-02-05 17:25:11 | ERROR | stderr |     return self.LoadFromFile(model_file)
2024-02-05 17:25:11 | ERROR | stderr |   File "/home/ubuntu/.local/lib/python3.10/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
2024-02-05 17:25:11 | ERROR | stderr |     return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
2024-02-05 17:25:11 | ERROR | stderr | RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
```