
Error after fine-tuning with LoRA on GLM2 and merging. #2988

Closed
hOVOzw opened this issue Feb 17, 2024 · 2 comments
Labels
bug Something isn't working

Comments


hOVOzw commented Feb 17, 2024

Error after fine-tuning with LoRA on GLM2 and merging.
I fine-tuned the model with LLaMA-Factory, exported it, and have already updated the configuration in model_config.py.

Expected result: the WebUI opens normally.

Actual result:
2024-02-18 01:39:12 | ERROR | stderr | Traceback (most recent call last):
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\multiprocessing\process.py", line 315, in _bootstrap
2024-02-18 01:39:12 | ERROR | stderr | self.run()
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\multiprocessing\process.py", line 108, in run
2024-02-18 01:39:12 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2024-02-18 01:39:12 | ERROR | stderr | File "D:\langchain_Chatchat\startup.py", line 389, in run_model_worker
2024-02-18 01:39:12 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2024-02-18 01:39:12 | ERROR | stderr | File "D:\langchain_Chatchat\startup.py", line 217, in create_model_worker_app
2024-02-18 01:39:12 | ERROR | stderr | worker = ModelWorker(
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\site-packages\fastchat\serve\model_worker.py", line 77, in __init__
2024-02-18 01:39:12 | ERROR | stderr | self.model, self.tokenizer = load_model(
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\site-packages\fastchat\model\model_adapter.py", line 348, in load_model
2024-02-18 01:39:12 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\site-packages\fastchat\model\model_adapter.py", line 823, in load_model
2024-02-18 01:39:12 | ERROR | stderr | tokenizer = AutoTokenizer.from_pretrained(
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 801, in from_pretrained
2024-02-18 01:39:12 | ERROR | stderr | return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\site-packages\transformers\tokenization_utils_base.py", line 2029, in from_pretrained
2024-02-18 01:39:12 | ERROR | stderr | return cls._from_pretrained(
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\site-packages\transformers\tokenization_utils_base.py", line 2261, in _from_pretrained
2024-02-18 01:39:12 | ERROR | stderr | tokenizer = cls(*init_inputs, **init_kwargs)
2024-02-18 01:39:12 | ERROR | stderr | File "C:\Users\Administrator\.cache\huggingface\modules\transformers_modules\chatglm2-lora\tokenization_chatglm.py", line 78, in __init__
2024-02-18 01:39:12 | ERROR | stderr | super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, **kwargs)
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\site-packages\transformers\tokenization_utils.py", line 363, in __init__
2024-02-18 01:39:12 | ERROR | stderr | super().__init__(**kwargs)
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\site-packages\transformers\tokenization_utils_base.py", line 1603, in __init__
2024-02-18 01:39:12 | ERROR | stderr | super().__init__(**kwargs)
2024-02-18 01:39:12 | ERROR | stderr | File "D:\ANACONDA\envs\chatchat\lib\site-packages\transformers\tokenization_utils_base.py", line 861, in __init__
2024-02-18 01:39:12 | ERROR | stderr | setattr(self, key, value)
2024-02-18 01:39:12 | ERROR | stderr | AttributeError: can't set attribute
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.34it/s]
2024-02-18 01:39:17 | ERROR | stderr |
2024-02-18 01:39:20 | INFO | model_worker | Register to controller
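The final AttributeError comes from transformers assigning every leftover tokenizer init kwarg with setattr(self, key, value). A minimal illustration of that mechanism (the class and attribute names here are illustrative, not the actual ChatGLM2 tokenizer):

```python
# Minimal sketch (assumption) of the failure mode behind
# "AttributeError: can't set attribute": transformers' base tokenizer
# assigns leftover init kwargs via setattr(self, key, value), which
# fails when a kwarg name collides with a read-only @property on the
# tokenizer class.

class DemoTokenizer:
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)  # mirrors tokenization_utils_base.py

    @property
    def vocab_size(self):  # read-only: no setter defined
        return 65024

try:
    DemoTokenizer(vocab_size=65024)  # kwarg name collides with the property
except AttributeError as exc:
    print("raised:", exc)
```

This is why a tokenization_chatglm.py written against an older transformers release can break on a newer one: a kwarg that used to be a plain attribute may now shadow a property.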

@hOVOzw hOVOzw added the bug Something isn't working label Feb 17, 2024

dosubot bot commented Feb 17, 2024

🤖

Hello, @hOVOzw! Nice to see you again. I hope everything has been going well.

Based on the error log you provided, the problem appears to occur while loading the model. The error shows a failure when setting an attribute, which may be caused by an incompatible model or tokenizer version, or by an incorrect model file path.

To resolve this, check the following:

  1. Make sure the model path in the MODEL_PATH dictionary is correct. If you are using a local model, the path should point to the model's actual location on your machine. If you are using a Hugging Face model, make sure the model name is correct.

  2. Check that the model you are using is compatible with the Langchain-Chatchat application; not all models are.

  3. If you run the model on a GPU (EMBEDDING_DEVICE or LLM_DEVICE set to "cuda"), make sure you have enough GPU memory. Some models need a lot of it, and running out can cause errors.

  4. Check that the API keys for the online models in ONLINE_LLM_MODEL are correct and that those APIs are working.

  5. If you use models as agent models, make sure they are included in the SUPPORT_AGENT_MODEL list.

If the error persists, providing the exact error message or more details about the modifications you made would help.

I also found some similar resolved issues in the Langchain-Chatchat repository that may help:

  1. Error when executing python startup.py -a
  2. [BUG] AttributeError: 'ChatGLMTokenizer' object has no attribute 'sp_tokenizer'
  3. python startup.py -a fails

The solutions to these issues may shed light on yours.
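For item 1 above, a small local sanity check can rule out a bad path. This is a hypothetical helper; the expected file names follow the usual ChatGLM2/Hugging Face export layout and may differ for your export:

```python
import os

# Hypothetical helper: verify that a merged-model directory contains the
# files the tokenizer/model loader will look for. The file list follows
# the usual ChatGLM2 / Hugging Face layout and may differ for your export.
EXPECTED_FILES = ["config.json", "tokenizer_config.json", "tokenization_chatglm.py"]

def check_model_dir(model_path, expected=EXPECTED_FILES):
    """Return the list of expected files missing from model_path."""
    return [f for f in expected if not os.path.isfile(os.path.join(model_path, f))]

# Usage (path is a placeholder):
# missing = check_model_dir(r"D:\models\chatglm2-merged")
# if missing:
#     print("missing files:", missing)
```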


About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

@zRzRzRzRzRzRzR
Collaborator

We suggest loading with peft instead: mount the fine-tuned adapter and the original base model separately. The wiki has related content.
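A sketch of that approach, assuming a LLaMA-Factory LoRA checkpoint and the public transformers/peft APIs (paths are placeholders; imports are deferred so the function only needs those libraries when actually called):

```python
def load_with_adapter(base_path, adapter_path):
    """Load the base ChatGLM2 model and mount a LoRA adapter with peft.

    base_path / adapter_path are placeholders for the original model and
    the LoRA checkpoint produced by fine-tuning. Imports are deferred so
    this sketch only requires transformers/peft when called.
    """
    from transformers import AutoModel, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)
    model = AutoModel.from_pretrained(base_path, trust_remote_code=True)
    model = PeftModel.from_pretrained(model, adapter_path)
    # Optionally fold the adapter weights back into the base model:
    # model = model.merge_and_unload()
    return model, tokenizer
```

Loading the adapter on top of the base model this way sidesteps the exported/merged checkpoint whose bundled tokenization_chatglm.py is failing against your installed transformers version.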
