[Bug]: ValueError: Model architectures ['Qwen2AudioForConditionalGeneration'] failed to be inspected. Please check the logs for more details. #12072

Closed
umie0128 opened this issue Jan 15, 2025 · 0 comments
Labels: bug
Your current environment

@DarkLight1337
I tested vLLM v0.6.4, v0.6.5, and v0.6.6, and none of them can start Qwen2-Audio.
In every case the environment is the official Docker image for the corresponding version (docker pull vllm/vllm-openai).
I looked through the related issues and tried downgrading NumPy to v1.x, but the official image already ships NumPy 1.26.4, so that had no effect.
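
As a quick sanity check, the following minimal snippet (run inside the vllm/vllm-openai container) verifies whether librosa is importable; the missing module is what both tracebacks below ultimately point at. This is a diagnostic sketch for illustration, not code from vLLM itself:

```python
# Diagnostic sketch: check whether the optional audio dependency
# that Qwen2-Audio needs is importable inside the container.
import importlib.util

spec = importlib.util.find_spec("librosa")
print("librosa:", "installed" if spec is not None else "MISSING")
```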

Model Input Dumps

Qwen2-Audio-7B-Instruct

🐛 Describe the bug

The error on v0.6.4 is:

WARNING 01-14 23:14:06 config.py:1865] Casting torch.bfloat16 to torch.float16.
ERROR 01-14 23:14:12 registry.py:297] Error in inspecting model architecture 'Qwen2AudioForConditionalGeneration'
ERROR 01-14 23:14:12 registry.py:297] Traceback (most recent call last):
ERROR 01-14 23:14:12 registry.py:297] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 468, in _run_in_subprocess
ERROR 01-14 23:14:12 registry.py:297] returned.check_returncode()
ERROR 01-14 23:14:12 registry.py:297] File "/usr/lib/python3.12/subprocess.py", line 502, in check_returncode
ERROR 01-14 23:14:12 registry.py:297] raise CalledProcessError(self.returncode, self.args, self.stdout,
ERROR 01-14 23:14:12 registry.py:297] subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'vllm.model_executor.models.registry']' returned non-zero exit status 1.
ERROR 01-14 23:14:12 registry.py:297]
ERROR 01-14 23:14:12 registry.py:297] The above exception was the direct cause of the following exception:
ERROR 01-14 23:14:12 registry.py:297]
ERROR 01-14 23:14:12 registry.py:297] Traceback (most recent call last):
ERROR 01-14 23:14:12 registry.py:297] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 295, in _try_inspect_model_cls
ERROR 01-14 23:14:12 registry.py:297] return model.inspect_model_cls()
ERROR 01-14 23:14:12 registry.py:297] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-14 23:14:12 registry.py:297] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 257, in inspect_model_cls
ERROR 01-14 23:14:12 registry.py:297] return _run_in_subprocess(
ERROR 01-14 23:14:12 registry.py:297] ^^^^^^^^^^^^^^^^^^^
ERROR 01-14 23:14:12 registry.py:297] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 471, in _run_in_subprocess
ERROR 01-14 23:14:12 registry.py:297] raise RuntimeError(f"Error raised in subprocess:\n"
ERROR 01-14 23:14:12 registry.py:297] RuntimeError: Error raised in subprocess:
ERROR 01-14 23:14:12 registry.py:297] <frozen runpy>:128: RuntimeWarning: 'vllm.model_executor.models.registry' found in sys.modules after import of package 'vllm.model_executor.models', but prior to execution of 'vllm.model_executor.models.registry'; this may result in unpredictable behaviour
ERROR 01-14 23:14:12 registry.py:297] Traceback (most recent call last):
ERROR 01-14 23:14:12 registry.py:297] File "<frozen runpy>", line 198, in _run_module_as_main
ERROR 01-14 23:14:12 registry.py:297] File "<frozen runpy>", line 88, in _run_code
ERROR 01-14 23:14:12 registry.py:297] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 492, in <module>
ERROR 01-14 23:14:12 registry.py:297] _run()
ERROR 01-14 23:14:12 registry.py:297] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 485, in _run
ERROR 01-14 23:14:12 registry.py:297] result = fn()
ERROR 01-14 23:14:12 registry.py:297] ^^^^
ERROR 01-14 23:14:12 registry.py:297] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 258, in <lambda>
ERROR 01-14 23:14:12 registry.py:297] lambda: _ModelInfo.from_model_cls(self.load_model_cls()))
ERROR 01-14 23:14:12 registry.py:297] ^^^^^^^^^^^^^^^^^^^^^
ERROR 01-14 23:14:12 registry.py:297] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 261, in load_model_cls
ERROR 01-14 23:14:12 registry.py:297] mod = importlib.import_module(self.module_name)
ERROR 01-14 23:14:12 registry.py:297] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-14 23:14:12 registry.py:297] File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
ERROR 01-14 23:14:12 registry.py:297] return _bootstrap._gcd_import(name[level:], package, level)
ERROR 01-14 23:14:12 registry.py:297] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-14 23:14:12 registry.py:297] File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
ERROR 01-14 23:14:12 registry.py:297] File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
ERROR 01-14 23:14:12 registry.py:297] File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
ERROR 01-14 23:14:12 registry.py:297] File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
ERROR 01-14 23:14:12 registry.py:297] File "<frozen importlib._bootstrap_external>", line 995, in exec_module
ERROR 01-14 23:14:12 registry.py:297] File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
ERROR 01-14 23:14:12 registry.py:297] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2_audio.py", line 25, in <module>
ERROR 01-14 23:14:12 registry.py:297] import librosa
ERROR 01-14 23:14:12 registry.py:297] ModuleNotFoundError: No module named 'librosa'
ERROR 01-14 23:14:12 registry.py:297]
Traceback (most recent call last):
File "/workspace/online_server.py", line 357, in
engine = AsyncLLMEngine.from_engine_args(engine_args, usage_context=UsageContext.OPENAI_API_SERVER) # 不传默认是UsageContext.ENGINE_CONTEXT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 683, in from_engine_args
engine_config = engine_args.create_engine_config()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 959, in create_engine_config
model_config = self.create_model_config()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py", line 891, in create_model_config
return ModelConfig(
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/config.py", line 251, in init
self.multimodal_config = self._init_multimodal_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/config.py", line 277, in _init_multimodal_config
if ModelRegistry.is_multimodal_model(architectures):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 422, in is_multimodal_model
return self.inspect_model_cls(architectures).supports_multimodal
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 391, in inspect_model_cls
return self._raise_for_unsupported(architectures)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/registry.py", line 348, in _raise_for_unsupported
raise ValueError(
ValueError: Model architectures ['Qwen2AudioForConditionalGeneration'] failed to be inspected. Please check the logs for more details.
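
Note that this ValueError is only a wrapper: in v0.6.4 the registry imports each model module in a subprocess (_run_in_subprocess in registry.py), so the real failure, ModuleNotFoundError: No module named 'librosa', only appears in the ERROR log above. A rough sketch of that pattern, with simplified and partly hypothetical names:

```python
# Rough sketch (simplified, hypothetical names) of the subprocess-based
# inspection in vllm/model_executor/models/registry.py: the model module
# is imported in a child process, and any failure there, such as the
# missing librosa above, surfaces as the generic "failed to be inspected"
# ValueError instead of the original ImportError.
import subprocess
import sys

def try_inspect(module_name: str) -> None:
    proc = subprocess.run(
        [sys.executable, "-c", f"import {module_name}"],
        capture_output=True,
        text=True,
    )
    if proc.returncode != 0:  # e.g. ModuleNotFoundError: librosa
        raise ValueError(
            f"Model architectures in {module_name!r} failed to be "
            "inspected. Please check the logs for more details."
        )

try_inspect("vllm.model_executor.models.qwen2_audio")
```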

The error on v0.6.6 is:

INFO 01-14 23:11:06 model_runner.py:1099] Loading model weights took 15.6454 GB
[rank0]: Traceback (most recent call last):
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 1588, in getattr
[rank0]: importlib.import_module(self.name)
[rank0]: File "/usr/lib/python3.12/importlib/init.py", line 90, in import_module
[rank0]: return _bootstrap._gcd_import(name[level:], package, level)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "", line 1387, in _gcd_import
[rank0]: File "", line 1360, in _find_and_load
[rank0]: File "", line 1324, in _find_and_load_unlocked
[rank0]: ModuleNotFoundError: No module named 'librosa'

[rank0]: The above exception was the direct cause of the following exception:

[rank0]: Traceback (most recent call last):
[rank0]: File "/workspace/online_server.py", line 357, in
[rank0]: engine = AsyncLLMEngine.from_engine_args(engine_args, usage_context=UsageContext.OPENAI_API_SERVER) # 不传默认是UsageContext.ENGINE_CONTEXT
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 707, in from_engine_args
[rank0]: engine = cls(
[rank0]: ^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 594, in __init__
[rank0]: self.engine = self._engine_class(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 267, in __init__
[rank0]: super().__init__(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 276, in __init__
[rank0]: self._initialize_kv_caches()
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 416, in _initialize_kv_caches
[rank0]: self.model_executor.determine_num_available_blocks())
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/executor/gpu_executor.py", line 68, in determine_num_available_blocks
[rank0]: return self.driver_worker.determine_num_available_blocks()
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 202, in determine_num_available_blocks
[rank0]: self.model_runner.profile_run()
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1291, in profile_run
[rank0]: .dummy_data_for_profiling(self.model_config,
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/inputs/registry.py", line 339, in dummy_data_for_profiling
[rank0]: dummy_data = processor.get_dummy_data(seq_len, mm_counts,
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/processing.py", line 858, in get_dummy_data
[rank0]: mm_inputs = self.apply(*processor_inputs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/processing.py", line 795, in apply
[rank0]: hf_inputs = self._apply_hf_processor(prompt_text, mm_items,
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/processing.py", line 704, in _apply_hf_processor
[rank0]: processor_data, passthrough_data = self._get_processor_data(mm_items)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2_audio.py", line 103, in _get_processor_data
[rank0]: mm_items.resample_audios(feature_extractor.sampling_rate)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/processing.py", line 302, in resample_audios
[rank0]: audio = resample_audio(audio, orig_sr=sr, target_sr=new_sr)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/audio.py", line 41, in resample_audio
[rank0]: return librosa.resample(audio, orig_sr=orig_sr, target_sr=target_sr)
[rank0]: ^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 1593, in __getattr__
[rank0]: raise ImportError(msg) from exc
[rank0]: ImportError: Please install vllm[audio] for audio support
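
The final ImportError names the fix directly: the audio extras are missing from the image. Installing them should resolve both tracebacks; a sketch using pip's module invocation (equivalent to running pip install "vllm[audio]" inside the container):

```python
# Sketch of the fix named by the ImportError above: install the
# vllm[audio] extras (which pull in librosa) into the container.
# Equivalent to: pip install "vllm[audio]"
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "vllm[audio]"])
```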

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.