err_execute_model_input_20250223-185713.zip
Bug Description
I'm using a custom ebook conversion routine that processes files one after another. The error happens irregularly, and I'm not sure how to narrow it down to the exact source of the problem. I can share my custom script, from which I SOMETIMES get the error.
2025-02-23 18:57:13,207 [ERROR] Engine background task failed
2025-02-23 18:57:13,207 [ERROR] Exception in callback _log_task_completion(error_callback=>)(<Task finishe...5713.pkl): ')>) at /home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py:45
handle: <Handle _log_task_completion(error_callback=>)(<Task finishe...5713.pkl): ')>) at /home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py:45>
Traceback (most recent call last):
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/worker/model_runner_base.py", line 116, in _wrapper
    return func(*args, **kwargs)
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1654, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/auralis/models/xttsv2/components/vllm_mm_gpt.py", line 640, in forward
    starting_sequence_start_ids, input_ids, positions = self._apply_op_to_seq_in_batch(input_ids,
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/auralis/models/xttsv2/components/vllm_mm_gpt.py", line 609, in _apply_op_to_seq_in_batch
    assert (modified_positions >= 0).all()
AssertionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 55, in _log_task_completion
    return_value = task.result()
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 872, in run_engine_loop
    result = task.result()
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 795, in engine_step
    request_outputs = await self.engine.step_async(virtual_engine)
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 347, in step_async
    outputs = await self.model_executor.execute_model_async(
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/executor/gpu_executor.py", line 180, in execute_model_async
    output = await make_async(self.driver_worker.execute_model
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 343, in execute_model
    output = self.model_runner.execute_model(
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/worker/model_runner_base.py", line 152, in _wrapper
    raise type(err)(
AssertionError: Error in model execution (input dumped to /tmp/err_execute_model_input_20250223-185713.pkl):

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/home/staszek/miniconda3/envs/auralis/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 67, in _log_task_completion
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
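vLLM's error wrapper reports that the failing model input was dumped to /tmp/err_execute_model_input_20250223-185713.pkl (the zip attached below). A minimal sketch for peeking at such a dump, assuming the file loads with the stdlib pickle module and that vllm/torch are importable so any pickled classes can resolve (inspect_dump is a hypothetical helper, not part of vLLM):

```python
import pickle
import pprint

# Path to the input dumped by vLLM's model-runner error wrapper;
# adjust to your own timestamped file.
DUMP_PATH = "/tmp/err_execute_model_input_20250223-185713.pkl"

def inspect_dump(path: str) -> dict:
    """Load the pickled model input and return a {field: type-name}
    summary, so the batched sequences/positions can be examined."""
    with open(path, "rb") as f:
        dump = pickle.load(f)
    if isinstance(dump, dict):
        summary = {k: type(v).__name__ for k, v in dump.items()}
    else:
        # Fall back to public attributes for structured objects.
        summary = {a: type(getattr(dump, a)).__name__
                   for a in dir(dump) if not a.startswith("_")}
    pprint.pprint(summary)
    return summary

# Usage: inspect_dump(DUMP_PATH)
```

From there one can look at the position tensors going into _apply_op_to_seq_in_batch to see which sequence produced a negative position.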
Minimal Reproducible Example
This is part of my script:
...
Expected Behavior
Each ebook in the batch is converted without errors, file after file.
Actual Behavior
At an unpredictable point in the batch, the forward pass fails the assertion (modified_positions >= 0).all() in _apply_op_to_seq_in_batch, and the async engine dies with AsyncEngineDeadError (full traceback above).
Error Logs
See the full traceback in the Bug Description above. The failing model input was dumped to /tmp/err_execute_model_input_20250223-185713.pkl (attached above as a zip).
Environment
- OS: Linux (paths under /home/staszek)
- Python 3.10, miniconda environment "auralis"
- Packages: auralis, vllm, torch (exact versions not captured in the log)
Possible Solutions
None yet; since the failure is intermittent, it may depend on particular input text or on how sequences happen to be batched together.
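To narrow down which input triggers the crash, one option is to wrap each file's conversion in a try/except that records the offending filename. This is only a sketch, not part of the original script; convert_one is a hypothetical stand-in for whatever the real per-file routine does:

```python
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ebook-batch")

def convert_one(path: str) -> None:
    """Hypothetical stand-in for the real per-file conversion routine."""
    raise NotImplementedError

def convert_all(paths, convert=convert_one):
    """Convert files one by one, collecting (path, exception) pairs for
    failures instead of letting one bad input kill the whole batch."""
    failures = []
    for path in paths:
        try:
            convert(path)
            log.info("OK: %s", path)
        except Exception as exc:  # AsyncEngineDeadError lands here too
            log.error("FAILED: %s\n%s", path, traceback.format_exc())
            failures.append((path, exc))
    return failures
```

One caveat: once AsyncEngineDeadError fires, the vLLM engine generally cannot be reused, so after logging a failure the engine (or the whole process) likely needs to be restarted before retrying the remaining files.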
Additional Information
Not sure what else to include, sorry :(