🐛 Describe the bug

When I try to use VLMs like llava-hf/llava-v1.6-mistral-7b-hf or Llama-3.2-11B-Vision-Instruct, the vLLM server starts without issues. Command used:
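(The exact launch command was not captured when this issue was copied. For illustration only, a typical OpenAI-compatible launch of this model with vLLM looks roughly like the following; the port and flags are assumptions, not necessarily the ones used here.)

```bash
# Illustrative only -- not the exact command from this report.
vllm serve meta-llama/Llama-3.2-11B-Vision-Instruct \
    --port 8000 \
    --max-model-len 4096 \
    --max-num-seqs 16
```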
I have a Python script that sends requests to this server using images from a particular folder; I can specify the total number of requests to complete and the concurrency with which they are sent.
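For reference, the request script is essentially the following (a minimal sketch, not the exact script; the endpoint URL, model name, and image folder are assumptions, while the prompt text and max_tokens=10 match the server logs below):

```python
import base64
import itertools
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import requests

# Assumptions: server URL, model name, and image folder are illustrative.
URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta-llama/Llama-3.2-11B-Vision-Instruct"
IMAGE_DIR = Path("./images")

PROMPT = "Does this image contain sexually suggestive or provocative content? Just say yes or no."


def send_request(image_path: Path) -> str:
    # Encode the image as a base64 data URI, as expected by the OpenAI-compatible API.
    b64 = base64.b64encode(image_path.read_bytes()).decode()
    payload = {
        "model": MODEL,
        "max_tokens": 10,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }
    resp = requests.post(URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def run(total_requests: int, concurrency: int) -> None:
    # Cycle through the images in the folder until total_requests have been sent,
    # keeping at most `concurrency` requests in flight at once.
    images = itertools.islice(
        itertools.cycle(sorted(IMAGE_DIR.glob("*.jpg"))), total_requests)
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for answer in pool.map(send_request, images):
            print(answer)


if __name__ == "__main__":
    run(total_requests=100, concurrency=2)  # concurrency=1 works; 2 crashes the engine
```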
When I set the concurrency to 1, everything runs fine and the vLLM server output looks like this:
INFO 02-19 17:45:38 engine.py:270] Added request chatcmpl-f999587430c74f35b54a8776a8738206.
INFO: 127.0.0.1:47262 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO 02-19 17:45:40 metrics.py:467] Avg prompt throughput: 20.9 tokens/s, Avg generation throughput: 4.1 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 02-19 17:45:40 logger.py:37] Received request chatcmpl-b1110b43342547b59ccde5040ef50cf2: prompt: '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nDoes this image contain sexually suggestive or provocative content? Just say yes or no.<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=10, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 02-19 17:45:40 engine.py:270] Added request chatcmpl-b1110b43342547b59ccde5040ef50cf2.
INFO: 127.0.0.1:47274 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO 02-19 17:45:41 logger.py:37] Received request chatcmpl-f12842d229124c8aadcecb85596ed043: prompt: '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nDoes this image contain sexually suggestive or provocative content? Just say yes or no.<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=10, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 02-19 17:45:41 engine.py:270] Added request chatcmpl-f12842d229124c8aadcecb85596ed043.
INFO: 127.0.0.1:47276 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO 02-19 17:45:42 logger.py:37] Received request chatcmpl-b755a73bbe214cbea59145874cfbeddf: prompt: '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nDoes this image contain sexually suggestive or provocative content? Just say yes or no.<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=10, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 02-19 17:45:42 engine.py:270] Added request chatcmpl-b755a73bbe214cbea59145874cfbeddf.
INFO: 127.0.0.1:47288 - "POST /v1/chat/completions HTTP/1.1" 200 OK
However, when I set the concurrency to 2, the first few requests go through and then the MQLLMEngine crashes, giving this output:
INFO 02-19 17:47:46 engine.py:270] Added request chatcmpl-b320da69e6614d9d968d70c838a62bbe.
INFO: 127.0.0.1:33578 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:33584 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO 02-19 17:47:48 logger.py:37] Received request chatcmpl-c7fabc22bf4f4888ab26c65d51cec8e1: prompt: '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nDoes this image contain sexually suggestive or provocative content? Just say yes or no.<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=10, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 02-19 17:47:48 logger.py:37] Received request chatcmpl-9997c92d2aa04412b56c7154d5fdc156: prompt: '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nDoes this image contain sexually suggestive or provocative content? Just say yes or no.<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=10, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 02-19 17:47:48 engine.py:270] Added request chatcmpl-c7fabc22bf4f4888ab26c65d51cec8e1.
INFO 02-19 17:47:48 engine.py:270] Added request chatcmpl-9997c92d2aa04412b56c7154d5fdc156.
INFO: 127.0.0.1:33600 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:33610 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO 02-19 17:47:49 logger.py:37] Received request chatcmpl-6e65512fe55742598110dc6439fa4cb7: prompt: '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nDoes this image contain sexually suggestive or provocative content? Just say yes or no.<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=10, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 02-19 17:47:49 logger.py:37] Received request chatcmpl-31efce9bad0643979e4dff9163dbde73: prompt: '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nDoes this image contain sexually suggestive or provocative content? Just say yes or no.<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=10, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 02-19 17:47:49 engine.py:270] Added request chatcmpl-6e65512fe55742598110dc6439fa4cb7.
INFO 02-19 17:47:49 engine.py:270] Added request chatcmpl-31efce9bad0643979e4dff9163dbde73.
CRITICAL 02-19 17:47:49 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO: 127.0.0.1:32912 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
CRITICAL 02-19 17:47:49 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO: 127.0.0.1:32926 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
The traceback looks like this in the vLLM server:
ERROR 02-19 17:47:49 engine.py:136] AssertionError()
ERROR 02-19 17:47:49 engine.py:136] Traceback (most recent call last):
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/engine/multiprocessing/engine.py", line 134, in start
ERROR 02-19 17:47:49 engine.py:136] self.run_engine_loop()
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/engine/multiprocessing/engine.py", line 197, in run_engine_loop
ERROR 02-19 17:47:49 engine.py:136] request_outputs = self.engine_step()
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/engine/multiprocessing/engine.py", line 215, in engine_step
ERROR 02-19 17:47:49 engine.py:136] raise e
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/engine/multiprocessing/engine.py", line 206, in engine_step
ERROR 02-19 17:47:49 engine.py:136] return self.engine.step()
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/engine/llm_engine.py", line 1359, in step
ERROR 02-19 17:47:49 engine.py:136] outputs = self.model_executor.execute_model(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/executor/executor_base.py", line 113, in execute_model
ERROR 02-19 17:47:49 engine.py:136] output = self.collective_rpc("execute_model",
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/executor/uniproc_executor.py", line 51, in collective_rpc
ERROR 02-19 17:47:49 engine.py:136] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/utils.py", line 2288, in run_method
ERROR 02-19 17:47:49 engine.py:136] return func(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/worker/hpu_worker.py", line 281, in execute_model
ERROR 02-19 17:47:49 engine.py:136] output = LocalOrDistributedWorkerBase.execute_model(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/worker/worker_base.py", line 402, in execute_model
ERROR 02-19 17:47:49 engine.py:136] output = self.model_runner.execute_model(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 02-19 17:47:49 engine.py:136] return func(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/worker/hpu_enc_dec_model_runner.py", line 658, in execute_model
ERROR 02-19 17:47:49 engine.py:136] hidden_states = self.model.forward(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/worker/hpu_enc_dec_model_runner.py", line 162, in forward
ERROR 02-19 17:47:49 engine.py:136] hidden_states = self.model(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 02-19 17:47:49 engine.py:136] return self._call_impl(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
ERROR 02-19 17:47:49 engine.py:136] return forward_call(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/model_executor/models/mllama.py", line 1419, in forward
ERROR 02-19 17:47:49 engine.py:136] outputs = self.language_model(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 02-19 17:47:49 engine.py:136] return self._call_impl(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1847, in _call_impl
ERROR 02-19 17:47:49 engine.py:136] return inner()
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1793, in inner
ERROR 02-19 17:47:49 engine.py:136] result = forward_call(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/habana_frameworks/torch/hpu/graphs.py", line 736, in forward
ERROR 02-19 17:47:49 engine.py:136] return wrapped_hpugraph_forward(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/habana_frameworks/torch/hpu/graphs.py", line 586, in wrapped_hpugraph_forward
ERROR 02-19 17:47:49 engine.py:136] return orig_fwd(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/model_executor/models/mllama.py", line 1107, in forward
ERROR 02-19 17:47:49 engine.py:136] hidden_states = self.model(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 02-19 17:47:49 engine.py:136] return self._call_impl(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1847, in _call_impl
ERROR 02-19 17:47:49 engine.py:136] return inner()
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1793, in inner
ERROR 02-19 17:47:49 engine.py:136] result = forward_call(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/model_executor/models/mllama.py", line 1043, in forward
ERROR 02-19 17:47:49 engine.py:136] hidden_states = decoder_layer(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 02-19 17:47:49 engine.py:136] return self._call_impl(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1847, in _call_impl
ERROR 02-19 17:47:49 engine.py:136] return inner()
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1793, in inner
ERROR 02-19 17:47:49 engine.py:136] result = forward_call(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/model_executor/models/mllama.py", line 950, in forward
ERROR 02-19 17:47:49 engine.py:136] hidden_states = self.cross_attn(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 02-19 17:47:49 engine.py:136] return self._call_impl(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1847, in _call_impl
ERROR 02-19 17:47:49 engine.py:136] return inner()
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1793, in inner
ERROR 02-19 17:47:49 engine.py:136] result = forward_call(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/model_executor/models/mllama.py", line 820, in forward
ERROR 02-19 17:47:49 engine.py:136] output = self.attn(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 02-19 17:47:49 engine.py:136] return self._call_impl(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1847, in _call_impl
ERROR 02-19 17:47:49 engine.py:136] return inner()
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1793, in inner
ERROR 02-19 17:47:49 engine.py:136] result = forward_call(*args, **kwargs)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/attention/layer.py", line 159, in forward
ERROR 02-19 17:47:49 engine.py:136] return unified_attention(query, key, value, self.layer_name)
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/attention/layer.py", line 246, in unified_attention
ERROR 02-19 17:47:49 engine.py:136] return self.impl.forward(query, key, value, kv_cache, attn_metadata,
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/attention/backends/hpu_attn.py", line 188, in forward
ERROR 02-19 17:47:49 engine.py:136] return self.forward_encoder_decoder(
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] File "/root/akarx/vllm-fork/vllm/attention/backends/hpu_attn.py", line 323, in forward_encoder_decoder
ERROR 02-19 17:47:49 engine.py:136] assert batched_kv_tokens % batch_size == 0
ERROR 02-19 17:47:49 engine.py:136] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 02-19 17:47:49 engine.py:136] AssertionError
CRITICAL 02-19 17:47:49 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO: 127.0.0.1:32928 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
CRITICAL 02-19 17:47:49 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO: 127.0.0.1:32932 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
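For reference, the assertion that fires in hpu_attn.py checks that the number of batched cross-attention KV tokens divides evenly by the batch size. A purely illustrative sketch of that check with made-up numbers (the actual values in the failing step are not shown in the logs):

```python
# Hypothetical numbers for illustration only; the real values are not in the logs.
batch_size = 2                      # two concurrent requests batched together
kv_tokens_per_seq = [6404, 1601]    # e.g. requests with different image tile counts
batched_kv_tokens = sum(kv_tokens_per_seq)   # 8005

# The check in forward_encoder_decoder is effectively:
#     assert batched_kv_tokens % batch_size == 0
print(batched_kv_tokens % batch_size)  # 1 -> the assert would fail for this batch
```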
This concurrency issue didn't exist last month; I tested around 24th Jan 2025 and everything worked at that time. The same issue occurs with LLaVA as well. Please assist, thank you!
I am on the v1.20.0 branch currently.