Qwen-14B-Int4 model fails to load #2993
🤖 Hi, @leizhu1989! Good to see you again, I hope you're doing well!

Your error message "PackageNotFoundError: No package metadata was found for auto-gptq" means the package "auto-gptq" is not installed in your environment. However, from the context provided, you appear to be loading the "Qwen-14B-Chat" model in Langchain-Chatchat. That model is indeed listed in the "llm_model" dictionary of the configuration file, so it should be available:

"llm_model": {
    ...
    "Qwen-14B-Chat": "Qwen/Qwen-14B-Chat",
    ...
}

The error may be caused by one of the following reasons:

If the error persists, please provide more context or the specific code snippet you are using to load the model so we can assist further. I also found some similar resolved issues in the Langchain-Chatchat issue tracker that may be helpful to you:

Hope this information helps. If you have any other questions, feel free to ask.

Sources
Just install the corresponding dependency.
Which dependency? Could you tell me its name?
Found it — in transformers' model-loading code:

def validate_environment(self, *args, **kwargs):
    gptq_supports_cpu = version.parse(importlib.metadata.version("auto-gptq")) > version.parse("0.4.2")
    if not gptq_supports_cpu and not torch.cuda.is_available():
        raise RuntimeError("GPU is required to quantize or run quantize model.")
    elif not (is_optimum_available() and is_auto_gptq_available()):
        raise ImportError(
            "Loading a GPTQ quantized model requires optimum (`pip install optimum`) and auto-gptq library (`pip install auto-gptq`)"
        )
    elif version.parse(importlib.metadata.version("auto_gptq")) < version.parse("0.4.2"):
        raise ImportError(
            "You need a version of auto_gptq >= 0.4.2 to use GPTQ: `pip install --upgrade auto-gptq`"
        )

So you just need to install auto-gptq (>= 0.4.2).
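A minimal fix sketch based on the check quoted above, assuming you install into the same conda environment that runs startup.py:

```shell
# Install the two libraries transformers' GPTQ path requires;
# the version pin matches the >= 0.4.2 check quoted above.
pip install optimum "auto-gptq>=0.4.2"

# Verify the package metadata is now discoverable, mirroring the
# importlib.metadata.version() call that raised in the traceback:
python -c "import importlib.metadata as m; print(m.version('auto-gptq'))"
```

Note that auto-gptq builds CUDA extensions, so the install may take a while and needs a CUDA toolkit matching your torch build.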
问题描述 / Problem Description
Running CUDA_VISIBLE_DEVICES=2,3 python startup.py -a --model-name Qwen-14B-Chat-Int4 fails with an error.
环境信息 / Environment Information
附加信息 / Additional Information
Error message:
2024-02-18 17:21:16 | ERROR | stderr | Process model_worker - Qwen-14B-Chat-Int4:
2024-02-18 17:21:16 | ERROR | stderr | Traceback (most recent call last):
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/importlib/metadata/__init__.py", line 563, in from_name
2024-02-18 17:21:16 | ERROR | stderr | return next(cls.discover(name=name))
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | StopIteration
2024-02-18 17:21:16 | ERROR | stderr |
2024-02-18 17:21:16 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-02-18 17:21:16 | ERROR | stderr |
2024-02-18 17:21:16 | ERROR | stderr | Traceback (most recent call last):
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2024-02-18 17:21:16 | ERROR | stderr | self.run()
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/multiprocessing/process.py", line 108, in run
2024-02-18 17:21:16 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2024-02-18 17:21:16 | ERROR | stderr | File "/home/lei/NLP/Langchain-Chatchat-master/startup.py", line 387, in run_model_worker
2024-02-18 17:21:16 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | File "/home/lei/NLP/Langchain-Chatchat-master/startup.py", line 215, in create_model_worker_app
2024-02-18 17:21:16 | ERROR | stderr | worker = ModelWorker(
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 77, in __init__
2024-02-18 17:21:16 | ERROR | stderr | self.model, self.tokenizer = load_model(
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 323, in load_model
2024-02-18 17:21:16 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 1583, in load_model
2024-02-18 17:21:16 | ERROR | stderr | model = AutoModelForCausalLM.from_pretrained(
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
2024-02-18 17:21:16 | ERROR | stderr | return model_class.from_pretrained(
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3013, in from_pretrained
2024-02-18 17:21:16 | ERROR | stderr | gptq_supports_cpu = version.parse(importlib.metadata.version("auto-gptq")) > version.parse("0.4.2")
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/importlib/metadata/__init__.py", line 1008, in version
2024-02-18 17:21:16 | ERROR | stderr | return distribution(distribution_name).version
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/importlib/metadata/__init__.py", line 981, in distribution
2024-02-18 17:21:16 | ERROR | stderr | return Distribution.from_name(distribution_name)
2024-02-18 17:21:16 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-18 17:21:16 | ERROR | stderr | File "/home/anaconda3/envs/Langchain_chat/lib/python3.11/importlib/metadata/__init__.py", line 565, in from_name
2024-02-18 17:21:16 | ERROR | stderr | raise PackageNotFoundError(name)
2024-02-18 17:21:16 | ERROR | stderr | importlib.metadata.PackageNotFoundError: No package metadata was found for auto-gptq
Could anyone tell me what is causing this?
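To reproduce the failing lookup outside startup.py, the small diagnostic below (my own sketch, not part of Langchain-Chatchat) mirrors the importlib.metadata call from the traceback and reports which GPTQ-related packages are visible in the current environment:

```python
import importlib.metadata


def check_gptq_deps():
    """Return {package: version or None}, using the same metadata
    lookup that raises PackageNotFoundError in the traceback."""
    status = {}
    for name in ("auto-gptq", "optimum"):
        try:
            status[name] = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            status[name] = None  # not installed in this environment
    return status


if __name__ == "__main__":
    for pkg, ver in check_gptq_deps().items():
        print(f"{pkg}: {ver or 'NOT INSTALLED'}")
```

If auto-gptq prints NOT INSTALLED here, the ModelWorker crash above is expected, and installing it into the same environment should resolve it.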